Application hosting platforms must integrate the complexity of artificial intelligence

David Szegedi - Red Hat France

Contrary to what one might believe, artificial intelligence is not a single entity but a multiple, complex technology, with highly varied use cases and user profiles – so much so that it would be more accurate to speak of "artificial intelligences" in the plural. To manage this complexity and meet the expectations of companies wishing to incorporate AI features into their applications, appropriate tooling is needed – more specifically, a framework capable of bringing all types of use cases together on a single platform.

Understanding the complexity of AI

AI, which is talked about everywhere today, actually divides into two approaches: generative AI, often described as "mainstream AI", which everyone has known since the launch of ChatGPT a little more than a year ago, and which notably generates images and videos and powers chatbots; and predictive AI, which has existed for much longer and consists of analyzing data to identify recurring patterns.

The renewed general interest in AI, driven by generative use cases, has pushed software vendors to turn back to this technology in order to respond to newly emerging uses. Indeed, the added value of predictive AI use cases – such as improving a bank's fraud detection capabilities – can in some cases be reinforced by the contributions of generative AI – such as refining a chatbot for that same bank's customers.

Relying on integrated platforms that can manage both types of AI independently therefore constitutes a key strategic choice for companies across all sectors that want to remain innovative. Security and traceability must not be forgotten either: AI is not just a technological issue, it also has a governance and compliance dimension.

Companies must be able to know how a model was trained and remain attentive to potential biases that may appear. Here, open source proves to be a strategic choice for providing the necessary transparency, allowing companies to vouch for the technology on which their AI models are built. Currently, none of the leaders in generative AI provide this transparency, and each remains a genuine black box for its users.

There are many applications

AI needs so-called "fresh" data, that is to say recent data, in order to minimize the risk of bias or false results (hallucinations). This explains why AI models are iterated very regularly, sometimes within a few months – whereas life cycles in the era of data warehouses and Big Data were measured in years – with new data having to be reinjected once the mathematical model has reached its limits.

For example, in the case of an autonomous driving model trained in a given urban environment, which evolves very quickly (roadworks, signage, store fronts, etc.), a car can quickly find itself lost if its AI model has not gone through a recent iteration.
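
To make this retraining loop concrete, here is a minimal sketch, assuming a simple statistical drift check on synthetic one-dimensional data; the threshold and the retrain() helper are hypothetical placeholders, not the API of any particular platform.

```python
# Minimal sketch: detect when "fresh" production data has drifted away from
# the data a model was trained on, and trigger a new training iteration.
# The threshold and retrain() are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def needs_retraining(reference: np.ndarray, fresh: np.ndarray,
                     p_threshold: float = 0.01) -> bool:
    """Return True when recent data no longer matches the training data."""
    # Two-sample Kolmogorov-Smirnov test: a very small p-value means the
    # distribution of fresh data has shifted relative to the reference.
    result = ks_2samp(reference, fresh)
    return result.pvalue < p_threshold


def retrain(fresh: np.ndarray) -> None:
    # Placeholder: in practice this would launch a training pipeline
    # on the hosting platform with the newly collected data.
    print(f"Retraining on {len(fresh)} fresh samples...")


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference = rng.normal(0.0, 1.0, 5_000)   # data the current model saw
    fresh = rng.normal(0.6, 1.2, 5_000)       # recent data, visibly drifted

    if needs_retraining(reference, fresh):
        retrain(fresh)                        # new iteration of the model
    else:
        print("Model still consistent with incoming data.")
```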

Faced with this unprecedented acceleration in the pace of releases, most users, whatever their level of expertise, feel lost. The need for computing power and data is also increasingly pressing, so that AI can approach the granularity of human intelligence as closely as possible and evaluate each situation as a human would.

But the best illustration of the complexity of AI remains the diversity of its use cases. Historically, the first sector to adopt it was banking and insurance, in order to cope with the explosion in volumes of dematerialized payments and to improve fraud detection through data sampling and the rapid identification of fraud patterns. This sector has also taken advantage of AI to improve customer satisfaction with a simpler, faster journey assisted by chatbots.
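
As a rough illustration of this kind of predictive pattern detection, the sketch below trains a basic anomaly detector on synthetic payment data; the features and figures are invented for the example, and a real fraud-detection pipeline would of course be far richer.

```python
# Minimal sketch: flag unusual payments with an anomaly detector.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Legitimate payments: moderate amounts, spread across the day.
normal = np.column_stack([
    rng.normal(60, 20, 10_000),    # amount in euros
    rng.uniform(0, 24, 10_000),    # hour of the transaction
])
# A few suspicious payments: very large amounts at unusual hours.
suspicious = np.column_stack([
    rng.normal(4_000, 500, 20),
    rng.uniform(2, 5, 20),
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns 1 for normal-looking transactions and -1 for anomalies.
flags = model.predict(np.vstack([normal[:5], suspicious[:5]]))
print(flags)   # typically [ 1  1  1  1  1 -1 -1 -1 -1 -1 ]
```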

In the health sector, AI has made enormous progress in interpreting patients' X-rays before handing the diagnosis stage over to a human healthcare professional. The patient journey is thus optimized in terms of both efficiency and time. This gain in speed has likewise benefited the development of drugs and vaccines: without AI, genomic research could not deliver results at its current pace.

Finally, the transport sector brings together two major use cases. The first is autonomous driving, which depends on progress in "functional safety", that is to say the detection of risks around the vehicle; this is an extremely important undertaking, combining AI with edge computing as close as possible to the user, through systems embedded in the vehicles themselves. The second is the widespread automation of freight trains, which could eventually be extended to passenger trains as well.

Diversifying platform use cases

To meet the expectations of companies wanting modern applications that integrate AI, application hosting platforms must be able to address every type of user profile: the data scientist specializing in mathematics, the MLOps engineer in charge of the operational side and the lifecycle of AI models, and, in between, the developer. The contributions of generative AI must now also be taken into account – for example, generating content from a prompt – to meet the needs of users responsible for administration or content creation tasks.

For administration, this may involve offering users advice based on good practices observed on comparable configurations. For content, it may mean assistance with code creation, drawing on the software heritage available from vendors or directly from customers.
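
By way of illustration of prompt-driven content generation, here is a minimal sketch using the open source Hugging Face transformers library and a small public model; the model choice and the prompt are assumptions made for the example, not a reference to any specific hosting platform, which would typically expose a larger, purpose-tuned model.

```python
# Minimal sketch: generate a draft completion from a prompt with an
# open source model. Model and prompt are illustrative choices only.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Good practices observed on comparable configurations suggest that"
completion = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(completion[0]["generated_text"])
```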

When customers design their own applications, AI models – machine learning models in particular – must be created, deployed and operated throughout their life cycle. Yet platforms that manage to address more than one type of profile are still rare.

The AI market is still far from maturity, as the current lack of standardization in deployments shows. It is, rather, in its growth phase – the phase during which it makes sense to get organized with the right vendors in order to achieve a standardized, well-governed AI whose added value can then be multiplied. To do this, it is essential to overcome the complexity of AI and the diversity of its technical profiles by relying on a suitable framework.

David Szegedi, Field CTO France, Red Hat

Expert opinions are published under the sole responsibility of their authors and in no way commit the editorial team.
