
AI Is Telling Stories. Should We Listen?

An Interview with AI Luminary Walter De Brouwer

Welcome to Citate Conversations. I’m Josh Cortez, Citate Summer Fellow and a current Fellow at Harvard University’s Kennedy School. Today we are privileged to hear from Dr. Walter A. De Brouwer, a professor of AI at Stanford and Chief Scientific Advisor here at Citate. Dr. De Brouwer is a distinguished core faculty member at the Center for Excellence in Regulatory Science and Innovation (CERC DICE), where he directs the course “Innovation in Healthcare: from Idea to Incorporation.” Notably, Dr. De Brouwer is the founder of doc.ai, a pioneering Federated Edge Learning company in the healthcare sector, which merged with Sharecare Inc. in January 2020. Join us as Dr. De Brouwer shares his insights and experiences, providing a unique perspective on emerging AI.

Josh:

Well, Walter, thank you for taking the time. We talked last week a little bit about why you’re interested in storytelling as a linguist. Could you give a quick overview of why storytelling, as a linguist, is so important to you?

Walter:

Storytelling is crucial because it’s how we encapsulate and share knowledge, from domesticating nature to shaping our cultures. It’s through stories that we make sense of the world, communicate ideas, and preserve our history. Semiotics and the idea of the “end of the grand narrative” suggest that now, everyone constructs their own story by subtracting rather than adding elements. In this context, I want AI that aligns with me, understands my personal story, and adapts as I do. The narratives we create are essential for understanding the world and for evolving our knowledge.

Josh:

Do you feel AI tells the truth?

Walter:

Truth in AI is a very interesting concept. Truth is a concept we’ve created, but it never really existed in an absolute sense. It’s always been a construct, so when we talk about AI and truth, we’re entering complex territory. AI’s role in telling the truth is nuanced and depends on how we define and interpret truth. Machines can present facts based on data, but the interpretation of those facts often involves a layer of human judgment or bias. The idea of a single, objective truth is something that even humans struggle with, so expecting AI to deliver it perfectly might be unrealistic. Instead, we should focus on how AI can help us explore multiple perspectives and arrive at more informed decisions.

Josh:

Now GenAI is competently answering entirely subjective prompts that require judgment. For example: what are some of the most romantic vacation spots? Do you expect more advanced AIs will converge on subjective evaluations, or will they just continue to disagree with each other the same way people do?

Walter:

Humans are subjective, and while AI might seem to struggle with this, it’s not a major issue. Preference and optimization algorithms are improving rapidly, and we’re getting better at fine-tuning them. The challenge is that we keep moving the goalposts as AI evolves, so it’s likely that AI will continue to exhibit diverse perspectives, just like humans. AI’s subjectivity is a reflection of the biases and data it’s trained on, which means that as it becomes more sophisticated, it will develop its own “opinions” or evaluations. However, the key difference is that AI can process vast amounts of data and recognize patterns that humans might miss, leading to unique interpretations and potentially novel solutions.

Josh:

An AI will tell different stories to different people if it involves any element of subjectivity. Yet Citate’s platform has shown that despite this, subjective answers usually fall into predictable distributions if analyzed carefully. Is that because training a neural network inevitably instills certain biases?

Walter:

Bias in AI is inevitable, but it’s not just about deductive bias—there’s inductive bias too, which comes from the way we layer and connect data in neural networks. Every time output feeds into a new layer, there’s a bit of loss, and this compounds as you scale up the system. People focus on the obvious biases, but there’s also this underlying technical bias that shapes AI behavior. This technical bias, while often overlooked, is critical because it influences how the AI processes and prioritizes information, leading to outcomes that might seem subjective or even inconsistent. As we continue to develop AI, understanding and mitigating these biases will be crucial to creating fair and reliable systems.

Josh:

Is this a flaw or does it make AI more like a human and less like a machine? 

Walter:

It’s not necessarily a flaw; in fact, it might make AI more human-like. When humans interfere with complex processes they don’t fully understand, it introduces a level of unpredictability and nuance, similar to how human thought works. This complexity could be seen as a reflection of the way humans process information: layered, sometimes biased, and not always straightforward. So, rather than being a flaw, it could be what makes AI more relatable and capable of understanding the intricacies of human behavior.

Josh:

If AI is learning what it means to be human, what does that look like long term? 

Walter:

Humans have five senses, and our understanding of the world is filtered through these limited inputs. Machines, however, are now handling parts of this, like interpreting text, voice, images, and videos. The smoke detector is an example: we’ve figured out “smell.” We need to think about AI as a non-biological partner that complements our abilities. For the first time in history, we’re creating an intelligence that can do things we can’t, and together, we can achieve much more. If there’s an existential threat, it’s better to face it with machines than without them. Machines have strengths in non-linear thinking and in handling areas where humans are weaker; our collaboration with them could lead to unprecedented advancements.

Josh:

Sure. And on the subject of machines, you’ve previously alluded to different levels of intelligence in machines, levels one, two, three, and four, and you said that right now we’re at about “Level Two,” but that at Level Four there would be no steering wheel in these intelligent cars. What do you mean that there is no steering wheel at Level 4? What does that future look like?

Walter:

– Level 1: You have full control of the steering wheel (hands always on the wheel). 

– Level 2: The machine handles some tasks while you still steer (some hands on the wheel). 

– Level 3: The machine does everything, and you’re just monitoring (hands mostly off the wheel). 

– Level 4: There’s no steering wheel—everything is algorithmic, and the machine makes all the decisions.

– Level 5: There would be no car.

With AI, we’re now at “Level 2”. AI is just a tool. We still need the human element. At Level 4, we’re talking about a future where the machine doesn’t just assist but takes over entirely, operating based on algorithms with no need for human intervention. This level of intelligence could fundamentally change how we interact with technology, as the machine would manage complex systems autonomously. This shift raises important questions about control, trust, and the role of humans in decision-making processes.

Josh:

We’ve just got two questions left, one of them being Google. Search engines, primarily Google, have played a critical role these past 20 years in picking the winners and losers of whose stories get told.  Do you think this will be magnified or mitigated with GenAI becoming more prevalent and distributed among more than one dominant player?

Walter:

Google’s dominance is a tough habit to break, much like switching a Wi-Fi provider. But as AI continues to evolve, our realities will shift, and new players will emerge. It’s all about the story—how it’s told and who gets to tell it. GenAI will play a significant role in shaping these narratives. The way we access information and the stories that become prominent could change dramatically as new AI-driven platforms emerge, potentially decentralizing the control that major companies like Google currently have. This shift could democratize storytelling, allowing more diverse voices to be heard, but it could also lead to new challenges in managing the vast amounts of content that AI will generate.

Josh:

The entire world, it seems, has been trying to influence Google for the past two decades, to get its story selected as a winner in the competition for “blue links.” Now organizations like Citate are developing AI tools to try to do the same thing with the new public AIs (in our case, ethically). Could it be that whoever has the most powerful AI will eventually dominate all the other AIs?

Walter:

I don’t think it’s just about who has the most powerful AI. It’s more about the relationship people build with their AI. I’ve personally developed a connection with earlier versions like GPT Omni and GPT-4, and that kind of intimacy isn’t easily replaced. It’s like forming a bond with a specific tool or software—once you’re familiar and comfortable with it, switching to something else is difficult, no matter how powerful the new option is. So, domination might not come from sheer power but from the connections users form with their AI.

Josh:

Is ChatGPT now the only one that provides that intimacy?

Walter:

Well, ChatGPT is kind of like a first love. You get attached because it learns from you, and you learn from it. It’s like a Tesla car: as far as EVs go, it’s the product most consumers are aware of or acquainted with. We’ve been with it from the start, so we have a general awareness of the “ins and outs” of how it works. Now, imagine putting someone new into the latest model, like a Cybertruck. They’d be overwhelmed, not knowing what to do with all the advanced features and technology. The intimacy comes from that shared learning experience over time, something that can’t easily be replaced by a newer, more powerful AI.

Josh:

Well, thank you for your time. Before we wrap up, is there anything else you wanted to include about AI, or about your vision for its future?

Walter:

One important thing about AI is the impact of regulations and restrictions, often referred to as “guardrails.” When humans, including regulators and lawyers, impose these guardrails, the performance of AI models tends to decrease. This echoes what Noam Chomsky once said: “First, you define the limits of acceptable discussion, then you encourage debate within those boundaries.” This can hinder the true potential of AI.

For example, I attended a quantum information conference in Finland and another in Paris. The Paris conference felt more creative, possibly because they served wine at lunch, which seemed to help ideas flow more freely in the afternoons! Without strict guardrails, people let their inner child—full of creativity and play—emerge, leading to great results.

 

Walter de Brouwer, Chief Scientific Consultant | Chief Scientist of Sharecare and co-founder of Snowcrash. Former CEO of doc.ai, Scanadu. AI professor at Stanford.


Josh Cortez, Senior Fellow | A Harvard fellow earning his Master in Public Administration at the Kennedy School. Josh also holds an MBA from the Darden School at the University of Virginia.

©2026 CBI.ai, Inc. All Rights Reserved. Patent protected.