In the past six months, we’ve seen some amazing advances in AI. The release of Stable Diffusion changed the art world forever, and ChatGPT rocked the internet with its ability to write songs, mimic research papers, and provide thorough, intelligent-sounding answers to the kinds of questions people usually ask Google.
These advances in generative AI provide further evidence that we are on the precipice of an AI revolution.
However, most of these generative AI models are foundation models: high-capacity, unsupervised learning systems that train on vast amounts of data and cost millions of dollars in processing power to run. Currently, only well-funded institutions with access to massive GPU compute can build these models.
Most companies developing application-layer AI still rely on supervised learning, which demands large amounts of labeled training data. Despite the impressive capabilities of foundation models, we are still in the early days of the AI revolution, and several bottlenecks are blocking the spread of application-layer AI.
Beyond the well-known data-labeling problem, additional data bottlenecks hinder late-stage AI development and deployment into production environments.
These problems persist despite the early promises of, and flood of investment into, technologies such as self-driving cars, which have been “just a year away” since 2014.
These promising proof-of-concept models perform well on curated datasets in research environments, but they struggle to make accurate predictions when released into the real world. The core problem is that the technology fails to meet the high performance bar required in high-stakes production environments and misses key requirements for robustness, reliability, and maintainability.
For example, these models often cannot handle outliers and edge cases, so a self-driving car will mistake a reflection of a bicycle for the bicycle itself. They are not reliable or robust, so a robot barista will make a perfect cappuccino two times out of five but spill the cup the other three.
As a result, the AI production gap, the gap between “that’s good” and “that’s useful,” has turned out to be bigger and scarier than ML engineers originally expected.
In fact, even the best systems still incorporate human interaction.
Fortunately, as more and more ML engineers adopt a data-centric approach to AI development, active learning strategies are seeing growing adoption. The most advanced companies use this technique to bridge the AI production gap and get models that work in the wild into production faster.
What is active learning?
Active learning makes model training an iterative process. The model is trained on an initial subset selected from a larger dataset. It then tries to make predictions on the remaining unlabeled data based on what it has learned so far. ML engineers can evaluate how confident the model is in those predictions and, using various acquisition functions, quantify the performance benefit of annotating each of the unlabeled samples.
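To make this concrete, here is a minimal sketch of one common acquisition function, entropy-based uncertainty sampling, in Python with NumPy. The function names (`entropy_acquisition`, `select_for_labeling`) are hypothetical, and this is only one of several acquisition functions an engineer might choose.

```python
import numpy as np

def entropy_acquisition(probs: np.ndarray) -> np.ndarray:
    """Score each unlabeled sample by the entropy of the model's
    predicted class distribution: the higher the entropy, the less
    confident the model, and the more informative a label would be.
    `probs` has shape (n_samples, n_classes)."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -np.sum(p * np.log(p), axis=1)

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return the indices of the `budget` samples the model is least
    sure about -- the ones worth sending to annotators first."""
    return np.argsort(entropy_acquisition(probs))[-budget:]
```

Other common choices score samples by least confidence (one minus the top class probability) or by the margin between the two most likely classes; all of them try to estimate which labels would teach the model the most.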
By expressing the uncertainty in its predictions, the model identifies on its own which additional data would be most useful for training. It then asks the annotators to provide more examples of exactly that kind of data, so it can concentrate on that subset in the next training round. Think of it as quizzing a student to find out where their knowledge gaps are. Once you know which topics they are missing, you can provide textbooks, presentations, and other materials so they can target their learning and better understand that specific aspect of the subject.
In active learning, training a model moves from a linear process to a circular, dynamic feedback loop.
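Below is a toy sketch of that feedback loop, assuming a scikit-learn-style classifier and NumPy arrays. The `active_learning_loop` function and its parameters are hypothetical; `y_pool` stands in for the human annotators (the “oracle”) who would supply labels on demand in a real system, and the acquisition step here uses least-confidence sampling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_pool, y_pool, n_initial=100, budget=20, rounds=5):
    """Toy active-learning loop: train on a small labeled seed set, then
    repeatedly query labels for the most uncertain pool samples.
    Assumes X_pool/y_pool are NumPy arrays and the seed set happens to
    cover every class."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), size=n_initial, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        # 1. Train on everything labeled so far.
        model.fit(X_pool[labeled], y_pool[labeled])

        # 2. Predict on the remaining unlabeled pool.
        probs = model.predict_proba(X_pool[unlabeled])

        # 3. Acquisition: pick the samples with the lowest top-class
        #    probability (least-confidence sampling).
        confidence = probs.max(axis=1)
        query_positions = np.argsort(confidence)[:budget]

        # 4. "Send" those samples to the annotators (here, y_pool)
        #    and fold the new labels back into the training set.
        newly_labeled = [unlabeled[i] for i in query_positions]
        labeled.extend(newly_labeled)
        unlabeled = [i for i in unlabeled if i not in set(newly_labeled)]
    return model
```

Each pass through the loop retrains on a slightly larger, more targeted labeled set, which is what turns training from a one-shot linear process into the circular one described above.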
Why advanced companies should embrace active learning
Active learning is fundamental to closing the prototype-to-production gap and increasing model reliability.
It is a common mistake to think of AI systems as fixed pieces of software; in reality, they must constantly learn and evolve. If they don’t, they will make the same mistakes over and over, or, when they encounter new situations in the wild, they will make new mistakes and never get a chance to learn from them.