It Is Time to Build the 2nd Generation of AI Products

by Varun Aggarwal, Kuldeep Yadav
6 min read

The first generation of AI products was driven by hype and quick trends, but now it’s time for a shift. This article explores how the second generation of AI should focus on domain experts, build reliable systems, and prioritize AI-first workflows. It highlights the need for tailored models, well-tested solutions, and better AI-human collaboration, aiming for lasting, impactful AI products that deliver real value.

Hopefully, the first generation of AI products is behind us. Some of these were simply prompts on ChatGPT/Dall-E/StableDiffusion to demonstrate a use case. Many of them got millions of users, no, sorry, viewers on Twitter and then vanished a day later. Then there were the thin wrappers on ChatGPT that masqueraded as usable products. Or a nice chat UI/UX was stamped onto existing or new products, with everyone waiting for it to deliver magic. Everyone was competing to build them faster: in 30 days, 30 hours, 30 minutes, even 30 seconds. They went viral too, and then landed quietly in some corner without delivering value.

It is not that AI cannot be disruptive or that significant value cannot be created; that takes hard work and patience. Instead, there has been a huge FOMO, creating a lot of heat (in lost dollars), noise (on social media), and little light (real value). Industry analysts and economists have already started to point out the lack of productivity gains from failed AI projects and the untenable bubble we have created.

It is now time to build the 2nd generation of AI products. This shift has already started. If you are done with getting 10 minutes of fame on Twitter and mean real business, here are some principles.

Stop building AI for novices, land value for domain experts

You can easily generate a sales email for a newly hired, untrained salesperson. Write a nice prompt, provide some context and the person's older emails, and whoosh! a new email is generated. The person will happily use the generated email; they are as clueless about what is good versus bad as your algorithm. The email will drive away the customer and piss off the boss!

On the other hand, an expert user will simply throw such an email into the trash! They will want personalization and the right use of their favorite elements in the email. They will demand reasons for the new elements you suggest, because they can judge the good from the bad. They will want the autonomy to invoke AI suggestions on specific parts of the email to improve them, which requires a smart UI/UX. They will also expect seamless integration of information that contextualizes the message (e.g., recent news about the company from its website, social media, or quarterly earnings). Finally, they will want the tool to improve over time: take less of their time, become more tuned to their needs, and deliver more value.
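To make this concrete, here is a minimal sketch of what selective, context-aware invocation could look like. The helper names (call_llm, fetch_company_news) are hypothetical placeholders, not any particular vendor's API.

# A minimal sketch of expert-oriented, selective AI assistance on an email draft.
# The helpers below are hypothetical placeholders, not a specific vendor's API.
from dataclasses import dataclass

@dataclass
class Suggestion:
    original: str   # the span the expert selected
    proposed: str   # the AI-proposed rewrite
    rationale: str  # why the change is suggested, so the expert can judge it

def call_llm(prompt: str) -> str:
    # Placeholder for whatever model endpoint the product actually uses.
    return "Proposed rewrite of the span\nRationale: ties the opener to their Q3 results"

def fetch_company_news(company: str) -> str:
    # Placeholder for pulling recent news / earnings snippets for context.
    return f"{company} announced record Q3 results last week."

def suggest_rewrite(draft: str, selected_span: str, company: str) -> Suggestion:
    context = fetch_company_news(company)
    prompt = (
        "You are assisting an experienced salesperson. Rewrite ONLY the selected "
        "span, keep their voice, and explain each change in one sentence.\n"
        f"Recent context about {company}:\n{context}\n"
        f"Full draft:\n{draft}\n"
        f"Selected span:\n{selected_span}\n"
        "Return the rewrite on the first line and the rationale on the next."
    )
    rewrite, _, rationale = call_llm(prompt).partition("\n")
    return Suggestion(selected_span, rewrite.strip(), rationale.strip())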

This is where we need to go. Such AI will deliver great value to the users and lead to actual productivity gains. It will really help the novice user, not deceive them.

It will improve AI itself!

AI needs to be well-tested and reliable

Your AI product needs to work. That means that if you are generating an image, a video, or an email, it should not work only for the one use case you engineered to show it off. Based on your target audience (a function of the industry, geography, and level of users), you need to draw a boundary around the kinds of inputs your tool must support and work well with. You then need to test your product extensively across the distribution of inputs it may receive. Furthermore, if the tool encounters inputs it cannot possibly handle, it should inform the user about its limits rather than generating an unintelligent response.
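As a rough illustration, here is a sketch of such an input boundary check. The supported topics, the length limit, and the detect_language / classify_topic helpers are all illustrative assumptions.

# A minimal sketch of an input guardrail: decide whether a request falls inside the
# boundary the product was tested for, and say so plainly when it does not.
SUPPORTED_LANGUAGES = {"en"}
SUPPORTED_TOPICS = {"sales_outreach", "follow_up", "meeting_request"}
MAX_INPUT_CHARS = 8_000

def detect_language(text: str) -> str:
    # Placeholder for a real language-identification model or library.
    return "en"

def classify_topic(text: str) -> str:
    # Placeholder for a cheap classifier (rules or a small model) that labels the request.
    return "sales_outreach" if "sales" in text.lower() else "unknown"

def check_request(text: str) -> tuple[bool, str]:
    if len(text) > MAX_INPUT_CHARS:
        return False, "This request is longer than the tool currently supports."
    if detect_language(text) not in SUPPORTED_LANGUAGES:
        return False, "The tool currently supports English-language requests only."
    if classify_topic(text) not in SUPPORTED_TOPICS:
        return False, "This request is outside the use cases the tool has been tested on."
    return True, ""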

We all know that generative AI is hard to test: it produces subjective outputs, and it is not easy to automatically detect whether they are fit for purpose. This is, and will continue to be, a major area of innovation. Thankfully, there are theoretical frameworks for how to do this: offline and online evaluations, metrics, and tools for continuous observability.

Beyond the theory, evaluating models is part art and part engineering: build a reasonable benchmark set, iteratively engineer and innovate, watch out for test set contamination, put in the right guardrails, and constantly monitor. This is what will differentiate the good products from the bad.

In the last year, there has been rapid progress in LLM evaluation and monitoring tools, and product builders should exploit these to build compelling products.
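Below is a minimal sketch of what an offline evaluation loop over a benchmark set could look like. The benchmark format (one JSON case per line) and the checks are illustrative assumptions, not a specific tool's API; in practice the checks might include model-graded scoring as well.

# A minimal sketch of an offline evaluation loop: run the product's generation step
# over a benchmark set and score each output with simple programmatic checks.
import json
from typing import Callable

def generate(prompt: str) -> str:
    # Placeholder for the product's actual generation pipeline.
    return "Hi there, following up on our conversation last week."

CHECKS: dict[str, Callable[[str, dict], bool]] = {
    "non_empty": lambda out, case: len(out.strip()) > 0,
    "mentions_recipient": lambda out, case: case["recipient"].lower() in out.lower(),
    "within_length": lambda out, case: len(out.split()) <= case.get("max_words", 200),
}

def run_eval(benchmark_path: str) -> dict[str, float]:
    with open(benchmark_path) as f:
        cases = [json.loads(line) for line in f]  # one JSON test case per line
    scores = {name: 0 for name in CHECKS}
    for case in cases:
        output = generate(case["prompt"])
        for name, check in CHECKS.items():
            scores[name] += int(check(output, case))
    return {name: hits / len(cases) for name, hits in scores.items()}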

One model does not fit all

People think the omniscient, omnipotent being has finally descended to earth: one model that will do everything and serve all use cases. Unfortunately, there isn't even one human being who fits all tasks. Different AI models and engineering layers offer trade-offs in cost, latency, and deployment feasibility. Based on your use case, you need to select and engineer a model accordingly.

For example, if the application is a real-time video avatar or video editing, you will need low latency in model responses, achieved through the AI, the engineering, or both. You may need more than one model: for example, a fast, relatively inaccurate model for lip-syncing during editing, and a high-quality, high-latency model for the final rendering. Or, if you need the service cost to be low, you may need a smaller model, caching, dynamic model selection based on the query, or a combination of these methods.
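Here is a minimal sketch of what cost- and latency-aware routing with caching could look like. The model names and the is_simple heuristic are illustrative assumptions, not a prescription.

# A minimal sketch of cost- and latency-aware model routing with response caching.
import hashlib

FAST_MODEL = "small-fast-model"        # low latency, lower quality (placeholder name)
QUALITY_MODEL = "large-quality-model"  # high quality, higher latency (placeholder name)

def call_model(model: str, prompt: str) -> str:
    # Placeholder for whichever inference API or local runtime is in use.
    return f"[{model}] response"

def is_simple(prompt: str) -> bool:
    # Illustrative heuristic; in practice this could be a small trained router.
    return len(prompt) < 500

_cache: dict[str, str] = {}

def respond(prompt: str, final_render: bool = False) -> str:
    # Use the fast model while editing; switch to the high-quality model for final output.
    model = QUALITY_MODEL if (final_render or not is_simple(prompt)) else FAST_MODEL
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    if key not in _cache:  # cache identical requests to cut cost
        _cache[key] = call_model(model, prompt)
    return _cache[key]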

The bottom line is that you need to work hard and smart with your model(s) and MLOps to land value. Quick off-the-shelf models are good for prototypes and MVPs, but their utility stops there. Building real-world products requires the careful selection and orchestration of multiple models.

One real-world product that has adopted this approach is Cursor, an AI-native coding IDE that uses a combination of off-the-shelf and custom-trained models to deliver a truly delightful experience to its users.

Invent an AI-first workflow

The other much-talked-about point is AI delivering value within the user's workflow, in the interface they work in by default. This is much needed, but the real disruption comes when the workflow itself is AI-first.

AI builders need to dig deeper into the domain (e.g., legal) and map and understand the entire user workflow. They need to evaluate how the end-to-end workflow can be AI-first or AI-native. It is not a band-aid on a product first built without the power of generative AI. Rather, conceptualize the product and build the technical architecture with the gen-AI revolution in mind.

For example, ask the question: what will an AI-first YouTube be? This needs the boldest of entrepreneurs to disrupt the incumbents. The time has come.

Turn human-AI friction into great AI-human collaboration

AI needs to work well with humans, and one needs to balance automation with human autonomy. A simple example: let us say AI writes something for you. If you don't like it, you throw it away and get frustrated. What if, instead, the product let you tell the AI what level of edit you need (light/medium/heavy) and gave you the response in a typical review mode, where you can accept or reject suggestions? The AI can also track your changes to its generated output and learn to personalize to your needs. It may look like a fairly simple feature, but most developers don't think this way, and that renders the product unusable.
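A minimal sketch of such a review mode follows, assuming sentence-level diffs and a simple acceptance log. All names are illustrative, not any specific product's implementation.

# A minimal sketch of review-mode collaboration: present AI edits as individual
# suggestions the user can accept or reject, and log the decisions so the product
# can learn the user's preferences over time.
import difflib
from dataclasses import dataclass, field

@dataclass
class EditSuggestion:
    original: str
    proposed: str
    accepted: bool | None = None  # None until the user decides

@dataclass
class ReviewSession:
    suggestions: list[EditSuggestion] = field(default_factory=list)

    def acceptance_rate(self) -> float:
        # A simple signal that can feed personalization over time.
        decided = [s for s in self.suggestions if s.accepted is not None]
        return sum(s.accepted for s in decided) / len(decided) if decided else 0.0

def build_review(original: str, ai_version: str) -> ReviewSession:
    # Turn an AI rewrite into sentence-level accept/reject suggestions.
    session = ReviewSession()
    old, new = original.split(". "), ai_version.split(". ")
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
        if op != "equal":
            session.suggestions.append(
                EditSuggestion(". ".join(old[i1:i2]), ". ".join(new[j1:j2]))
            )
    return session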

Once again, Cursor is a good example of this. It gives you line-by-line edits in code and the ability to accept or reject them. Further, the UI/UX allows specific queries and tasks to be performed on the code.

If you haven’t built your AI product yet, there is nothing to worry about. You haven’t missed the bus. You can build your own: not a bus at all, but a new, gen-AI-first way to think about transport. That is how AI has to be thought of.

The article originally appeared on LinkedIn.

Featured image courtesy: Mariia Shalabaieva.

Varun Aggarwal
Varun Aggarwal, an MIT alum, is the Founder of Change Engine — an innovation ecosystem builder that runs an AI startup accelerator for early-stage AI companies. He is also the co-founder of Foundation for Advancing Science and Technology — India, a non-profit dedicated to transforming India's science and technology ecosystem. Varun co-founded and sold Aspiring Minds, India's largest job skills testing company, creating globally recognized products like AMCAT and Automata.

Kuldeep Yadav
Kuldeep Yadav is an AI leader with a decade of experience developing AI products that serve millions of users. Currently, he is the SVP of AI and Labs at SHL, where he leads a global team to create innovative AI-powered HRTech software and platforms. He has previously worked as a Research Scientist at Xerox Research and was a founding member and CTO of VideoKen. Kuldeep holds a PhD in Computer Science and has published several research papers in reputed AI conferences.

Ideas In Brief
  • The article critiques first-generation AI products, highlighting the need for AI solutions to address real problems.
  • It advocates building for domain experts, ensuring AI reliability, and using tailored models for specific tasks.
  • The piece stresses creating AI-first workflows and improving AI-human collaboration for better productivity.
