“Prompting” will become mostly obsolete in the near future, surviving only in specific circumstances. Prompts are a way for users to convey to an AI model what they want by sending it direct instructions or samples. Text and image prompting are powerful and wonderful, but they are a bit like trying to fly a fighter jet while sitting on the roof: you are trying to guide a very powerful and complex machine with no visibility into the internal controls. Large Language Models (LLMs) are phenomenally complex mechanisms that we do not fully understand, and right now we’re steering them without much leverage or access to their internal workings. Today’s prompting methods fail to harness the full potential of AI models, and their novelty obscures their awkwardness. However, as we learn more about how today’s models work, as new types of models are released in media like video and 3D rendering, and as researchers and developers gain more direct access to model weights while models become smaller, more powerful interaction techniques will emerge.

Active and Contextual Triggering

We can divide these new interactions into “active triggering” and “contextual triggering.” Active triggering involves deliberately interacting with a model, knowing that you’re engaging with it and seeking a specific response, such as clicking a button or speaking a wake phrase like “OK Google.” Contextual triggering needs no deliberate action from the user; the model anticipates when it should engage.

We’re already seeing active triggering move beyond prompts in the vision videos Google has released showing how it plans to incorporate generative AI into Workspace. Content is created in one place, then a button triggers an action or a decision, such as creating slides. With near-future models, interacting with a model won’t simply mean typing text or clicking a button; it will encompass a variety of methods, such as highlighting part of a page, nudging, expanding, sliding: any of the interactions we expect from traditional software, and more, but with generative elements. Consider something like this: you are writing in a word processor, and at the bottom there are sliders for “Earnest ↔ Funny,” “Formal ↔ Personal,” and so on. Drag a slider, and the entire piece is adjusted in real time. Or consider this: you’re reading a document, highlighting the parts you like, and the entire piece is updated to bring the writing closer to the style of the highlighted parts. These are the kinds of active generation I expect in the near future, none of which involve direct text prompting or a chat-like element.
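To make the slider idea concrete, here is a minimal sketch of how such a control might be wired up. Everything here is hypothetical: the slider labels, the way positions are folded into an instruction, and the `rewrite_with_model` stand-in for whatever text model the editor would actually call.

```python
# Hypothetical sketch of slider-driven rewriting. The sliders and the
# rewrite_with_model() stand-in are illustrative, not any real product's API.

def slider_instruction(sliders: dict[tuple[str, str], float]) -> str:
    """Turn slider positions (0.0 = left label, 1.0 = right label)
    into a natural-language rewriting instruction."""
    parts = []
    for (left, right), value in sliders.items():
        target = right if value >= 0.5 else left
        strength = abs(value - 0.5) * 2  # 0 = neutral, 1 = fully at one end
        parts.append(f"{int(strength * 100)}% more {target.lower()}")
    return "Rewrite the text to be " + ", ".join(parts) + ", preserving its meaning."

def rewrite_with_model(text: str, instruction: str) -> str:
    # Stand-in for the call to a generative model. A real editor would send
    # `instruction` and `text` to an LLM and display the rewrite live.
    return f"[{instruction}]\n{text}"

sliders = {("Earnest", "Funny"): 0.8, ("Formal", "Personal"): 0.3}
draft = "Our quarterly results exceeded projections."
print(rewrite_with_model(draft, slider_instruction(sliders)))
```

The interesting design question is the mapping step: the user never sees a prompt, but the interface quietly composes one from the slider state on every adjustment.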

To see some really innovative thinking about how we can interact with generative AI, check out https://puppets.app/. It combines hand gestures, voice, text, and imagery. The use case, creating puppet shows, is unusual in today’s crowded race to create the fastest and best sales emails. It is still in its infancy, but it is a signal of the multi-modal, fluid, and embodied interactions that will soon mature, for use cases far beyond what we see today.

In many cases, the models will be “contextually triggered.” This means that without being told, the model will know when it should activate, simply from what it knows the user is doing. This is already the case for smaller tasks, like code completion in Copilot or grammar correction in Grammarly, but in the near future it will apply to more complicated tasks and workflows. Companies like Lindy.ai are already trying to build browser-based assistants that can take on any number of tasks. As new models are trained or fine-tuned to explicitly include browser and software actions, it will become less necessary to tell the model what to do and when.
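As a rough illustration of what contextual triggering might look like under the hood, here is a toy sketch: a watcher inspects a stream of user events and decides, from context alone, when a model should step in. The event names and the trigger heuristic are invented for illustration, not drawn from any real product.

```python
# Toy sketch of contextual triggering: the user never asks for help;
# the system decides to engage based on what it observes.
# Event names and the trigger heuristic are invented for illustration.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str       # e.g. "typing", "pause", "delete"
    seconds: float  # how long the event lasted

def should_engage(recent: list[Event]) -> bool:
    """Fire when the user has been deleting and pausing a lot,
    a crude signal that they may be stuck on a rewrite."""
    deletes = sum(1 for e in recent if e.kind == "delete")
    pausing = sum(e.seconds for e in recent if e.kind == "pause")
    return deletes >= 3 and pausing > 10.0

session = [
    Event("typing", 12.0), Event("delete", 1.0), Event("pause", 6.0),
    Event("delete", 2.0), Event("pause", 7.0), Event("delete", 1.5),
]
if should_engage(session):
    print("Offer a suggestion: the model activates without being asked.")
```

In practice the trigger rule would itself be learned rather than hand-written, but the shape is the same: observe, infer intent, engage.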

Hopes for the Future

In the world beyond prompting, I hope we will still have opportunities for people to push these models farther than their developers originally intended. Glimmers of this reality are emerging in the generative art community. Artists are fine-tuning models with simple, technically approachable methods to create custom image generators that encode their special style, then releasing these custom generators back into the community. It is almost as if, for them, the digital image is no longer the art piece. The custom model, the style itself, is the art piece; the image produced is simply an artifact of the audience’s interaction.
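For a sense of the mechanics, here is a minimal sketch of how such a community style model is typically used with the open-source Hugging Face diffusers library. The repository name below is hypothetical; artists publish their fine-tuned checkpoints under their own names, and anyone can load one in a few lines.

```python
# Minimal sketch: loading a community fine-tuned style model with the
# Hugging Face diffusers library. The repo id below is hypothetical;
# substitute the checkpoint an artist has actually published.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "some-artist/custom-style-model",  # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a GPU is available

# The fine-tune, not the prompt, carries the style: a plain prompt
# comes back rendered in the artist's encoded aesthetic.
image = pipe("a quiet garden at dusk").images[0]
image.save("garden_in_custom_style.png")
```

Note how little the prompt matters here; the artistic decisions live in the weights.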

Take, for example, the “Floral Marble” style generator created by artist Joppe Spaa. The generator creates ethereal images of stunning beauty. Here’s the important part: it would be nearly impossible, or at least extremely difficult, to create these images with the original model through prompting alone.

Joppe Spaa’s “Floral Marble” images demonstrate how artists are fine-tuning AI image generators to create pieces more extraordinary than is possible with prompts.


No matter how far developers push AI model capabilities, artists are finding ways to push them farther into new forms of beauty and utility.

Prompting today might be a little awkward and doomed to fade, but one element of it that I find positive is that it requires the user to think of something to say. Creative initiative is required to get results. I think this is healthy, even if it demands more effort. If we keep interacting proactively, we will do more than automate our work; we’ll invent new forms of thought, creativity, and experience.

#futuresthinking #foresight

IFTF Foresight Essentials

Institute for the Future (IFTF) is the world’s leading futures organization. Its training program, IFTF Foresight Essentials, is a comprehensive portfolio of strategic foresight training tools based upon over 50 years of IFTF methodologies. IFTF Foresight Essentials cultivates a foresight mindset and skillset that enable individuals and organizations to foresee future forces, identify emerging imperatives, and develop world-ready strategies. To learn more about how IFTF Foresight Essentials is uniquely customizable for businesses, government agencies, and social impact organizations, visit iftf.org/foresightessentials or subscribe to the IFTF Foresight Essentials newsletter.