Whether this is truly an “iPhone moment” or a serious threat to Google search remains unclear. Although it will drive a change in consumer behavior and expectations, the more immediate shift will be companies pushing to build smarter tools on top of large language models (LLMs) that learn from their own data and services.
And this, ultimately, is the key: the importance and value of generative AI today is not really a question of societal or industry-wide change. Rather, it is a question of how this technology can open up new ways of interacting with large and as-yet-untapped stores of data and information.
OpenAI clearly recognizes this reality and the business opportunity it presents: although the list of organizations participating in the ChatGPT plugin initiative is still small, OpenAI has opened a waitlist where companies can sign up for plugin access. In the coming months, we will no doubt see many new products and interfaces powered by OpenAI’s generative AI systems.
Although it is easy to fall into the trap of seeing OpenAI as the sole gatekeeper of this technology, and ChatGPT as the go-to generative AI tool, this is fortunately far from the case. You don’t need to join a waiting list or hand over large sums of money to Sam Altman. Instead, LLMs can be self-hosted.
This is something we are starting to see at Thoughtworks. In the latest volume of the Technology Radar, our opinionated guide to the techniques, platforms, languages, and tools used across the industry today, we’ve identified a number of related tools and practices that suggest the future of generative AI will be far more varied and open than mainstream commentary would have you believe.
Unfortunately, we don’t think this is something many business and technology leaders have yet recognized. The industry’s focus has been on OpenAI, which means that the ecosystem of tools beyond it (exemplified by open projects like GPT-J and GPT Neo) and more DIY approaches to fine-tuning have so far been somewhat neglected. This is a shame, because these options offer real benefits. For example, a self-hosted LLM sidesteps the privacy issues that can come from connecting your data to an OpenAI product. In other words, if you want to apply an LLM to your organization’s data, you can do it yourself; there is no need to go anywhere else. Given the concerns of both industry and the public when it comes to privacy and data management, it is prudent to exercise caution rather than be swayed by big tech marketing efforts.
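As a rough sketch of what self-hosting can look like, the snippet below loads an open model entirely on local infrastructure. It assumes the Hugging Face `transformers` library and the publicly available `EleutherAI/gpt-neo-125m` checkpoint; neither is prescribed by the text, and any of the larger GPT-J or GPT Neo variants could be substituted given suitable hardware.

```python
# Minimal sketch: running an open LLM (GPT Neo) locally with the Hugging Face
# transformers library, so prompts and data never leave your own servers.
from transformers import pipeline

# Downloads the small EleutherAI/gpt-neo-125m checkpoint on first run; larger
# GPT-J / GPT Neo checkpoints work the same way, given enough memory.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125m")

prompt = "Self-hosting a language model means"
# Greedy decoding (do_sample=False) keeps the output deterministic.
result = generator(prompt, max_new_tokens=20, do_sample=False)

# The pipeline returns a list of dicts; "generated_text" holds the prompt
# plus the model's continuation.
print(result[0]["generated_text"])
```

Because the model weights sit on your own machines, the same pipeline can be pointed at internal documents or product data without any third-party API in the loop.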
A related trend we have observed is domain-specific language models. Although these are only just beginning to emerge, fine-tuned, publicly available, general-purpose LLMs can form the basis for incredibly useful information-retrieval tools built on your own data. These could be applied, for example, to product information, content, or internal documents. In the coming months, we’ll see more examples of these being used for things like assisting customer support staff and enabling content creators to experiment more freely and efficiently.
If generative AI becomes more domain-specific, the question remains as to what this actually means for humans. I would argue, however, that this medium-term vision of AI’s future is far less threatening and frightening than many of today’s doomsday visions. Over time, people will need to develop a more nuanced relationship with the technology, better bridging the gap between generative AI and more specialized data sets. The technology will lose its mystique, no longer seeming to know everything, and instead become embedded in our own context.