
The satnav problem, why AI breaks processes, how to rig an election and undermine a princess, why ChatGPT is smarter than doctors and much more...

Thought of the week: are we all experiencing the satnav problem?

An AI-generated image using Midjourney: an illustration of a couple with their eight-year-old son in a forest, both parents looking worried and the wife checking her phone.

Over the Easter weekend, my family and I went on a walk in our local forest. Feeling a little adventurous, we veered off the beaten track and soon found ourselves slightly lost. To make matters worse, my diabetic son started experiencing a low blood sugar episode, and we needed to get back to the car quickly.

In this moment of mild crisis, my wife and I did what every couple does: we argued. First about whose fault it was that we forgot to bring some sugary snacks for our son and then of course about the fastest way back to the car. I was sure it was this way, she was sure it was that way, meanwhile my son was in the middle getting steadily fainter. To settle the matter, we did what everyone increasingly does – we asked Google, which confidently plotted the quickest route back to safety and supplies. Long story short, taking Google’s advice eventually got us back to the main and very well-trodden path.

What I found interesting was that even when we recognised a route we’d taken over 50 times, my wife still insisted on keeping Google Maps open. This made me wonder whether this is indicative of a wider issue. Are we slowly eroding our ability to follow our gut instincts and instead relying too heavily on potentially flawed decision-making engines? I’ve nicknamed this the ‘satnav problem’ – when you turn off your satnav app, subsequently get lost and then curse your stupidity for ever having doubted that Google is smarter than you.

This phenomenon isn't limited to navigation. Below I riff on the news that AI-based diagnostic tools are becoming common currency in our health systems. While these tools can be incredibly helpful, they may also provide a false sense of security. Doctors might begin to second-guess their own expertise and intuition, relying more on AI's suggestions than their own judgement.

Furthermore, there's the question of responsibility. When we outsource our decision-making to AI, who is accountable for the outcomes? If an AI-guided decision leads to an unfavourable result, is it the fault of the user, the developer or the technology itself?

My takeaway is that we all need to actively cultivate and preserve our decision-making abilities in the age of AI, and maybe even try an AI detox for a week.

Why AI fundamentally breaks processes

An AI-generated image using Midjourney: a literal bridge made of glowing digital code spanning a chasm filled with discarded paper and files.

The UN is our biggest client, which means most of our work has to be won through a slightly tortuous competitive tender process. Tortuous for two reasons: firstly, because the tender management systems our poor UN clients have to use are outdated and poorly designed; and secondly, because the procurement process itself, though it has gone online, has kept the same paper-based idiosyncrasies.

As a result it can, and often does, take over six months for buyers to find a supplier; tenders often get cancelled due to procedural issues or a lack of bids; and all parties are rarely happy with the outcome. I suspect that AI is only making this (and other paper-based processes such as recruitment) worse: anyone can now draft a perfect technical proposal or CV in seconds, which makes this form of assessment far less valid.

I see this as another example of AI breaking formerly paper-based assessment processes. Now that AI can do our ‘homework’, from drafting our CVs to writing reports, how do we assess the merit of those – be they companies or people – behind the content? Getting AI to do this algorithmically might be a seductive option, but then we have the issue of entrenched bias and the lack of transparency.

Clearly, we need to come up with new and better ways to make evidence-based decisions, so expect major changes in how you apply for jobs and prove what you used ChatGPT to say you could do.

How to rig an election and undermine a princess, AI-style

Image showing red robotic hands casting votes into a ballot box

Microsoft researchers highlight Taiwan’s recent elections as a cautionary tale of the power of AI-generated disinformation. Taiwan, like many countries, deployed a platform called Cofacts to allow citizens to fact-check information during the 2020 (pre-ChatGPT) presidential election. Cofacts responded to 80% of queries within 24 hours. By the next election, however, its response rate had plummeted to 15% on election day because it was overwhelmed by the sheer volume of misleading content. Additionally, attempts to crash Taiwanese news websites spiked by 3,370% in the three months leading up to the elections, suggesting a calculated effort to push citizens away from established news outlets and towards the chaos of social media for their news.

Similarly, the BBC reports that security researchers believe a Russia-based disinformation group was behind the spread of rumours and conspiracy theories about the Princess of Wales' health. Last month, X’s algorithms did their bit for social cohesion by actively recommending content falsely suggesting a video of the Princess of Wales out shopping was really a body double. Clearly disinformation is the new soft power and social media its willing accomplice.

ChatGPT is smarter than doctors

A study reveals that OpenAI's GPT-4 outperformed physicians in clinical reasoning. According to the researchers, GPT-4 scored higher than both attending physicians and internal medicine residents, showcasing its potential to aid in clinical decision-making. A small yet important caveat is that although the AI was better at diagnosis, it also got it ‘plain wrong’ more often than the humans.

“Early studies suggested AI could make diagnoses, if all the information was handed to it. What our study shows is that AI demonstrates real reasoning – maybe better reasoning than people through multiple steps of the process. We have a unique chance to improve the quality and experience of healthcare for patients.” – Adam Rodman, MD

If the future of healthcare is rapidly heading towards AI-augmented diagnosis, then AI-driven diagnosis is the inevitable next step. This is great for improving access to cheap, 24/7 diagnosis and patient support; however, we have another satnav situation.

If all diagnostic decisions, from triage to diagnostic testing and imaging, begin with AI, humans will be less likely to trust their own judgement. In the long run this may dull our wits and, like our dependency on satnav systems, reduce our innate ability to be guided by our experiences and those of others.

“Just ChatGPT it.” OpenAI moves to take on Google search

I’ve spoken extensively about OpenAI positioning itself as a successor to Google’s ad-polluted search. So the announcement on 1 April that using ChatGPT no longer requires registration was not in fact an April Fools’ joke but a calculated move to directly challenge Google as people’s first port of call when seeking answers.

CEO Sam Altman has said publicly that he’s interested in building a search product integrated into ChatGPT. And removing the sign-in element makes it that much easier for a user to quickly open up ChatGPT to ask a question, similar to how Google or Perplexity operate today.

What he fails to mention is that ChatGPT usage has fallen, so removing the registration requirement has a number of obvious benefits: it increases usage, provides more data on what people are looking for and enables more free fine-tuning as people give responses a thumbs up or down (though I don’t know anyone who actually does this).
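As a rough illustration of why that feedback is valuable, here is a minimal sketch of how a thumbs-up/thumbs-down signal might be captured as preference data for later fine-tuning. Everything here – the schema, field names and file format – is my own hypothetical illustration, not OpenAI’s actual pipeline:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    """One user rating of a model response (hypothetical schema)."""
    prompt: str
    response: str
    rating: int    # +1 for thumbs up, -1 for thumbs down
    timestamp: str

def log_feedback(prompt: str, response: str, thumbs_up: bool,
                 path: str = "feedback.jsonl") -> None:
    """Append one rating to a JSONL file. Aggregated records like
    these are the raw material for preference-based fine-tuning."""
    record = FeedbackRecord(
        prompt=prompt,
        response=response,
        rating=1 if thumbs_up else -1,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a user downvotes a wrong answer
log_feedback("What is the capital of Australia?", "Sydney", thumbs_up=False)
```

Multiplied across millions of now registration-free users, that is a steady stream of labelled preference data at zero labelling cost – exactly the kind of asset a sign-up wall throttles.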

Other interesting AI news this week

A five-way split image of celebrities, including Billie Eilish, Nicki Minaj, Katy Perry and Zayn Malik
  • Artists against AI – Over 200 artists, including Billie Eilish and Stevie Wonder, signed an open letter calling on tech companies to cease development of AI music tools, asserting that such technology devalues the rights and work of human artists. Having played with Stable Audio, I fully understand their position.

  • Does AI herald the end of language learning? An interesting but sad article in The Atlantic reflects on the impact of AI on foreign-language education, highlighting a shift towards relying on sophisticated translation tools rather than actually learning a language. As the personal experiences and expert opinions in the piece suggest, AI’s efficiency cannot replicate the profound human connection that comes from genuinely understanding another language.

  • Film-making on a shoestring – How one film director used AI to turn a two-hour film shoot into a short film.

  • Positive deepfakes – How a clever TikToker is using deepfakes of celebrities to teach maths.

Tools we're playing with this week

Hume.ai – An app that brilliantly executes what Siri and Google Assistant miserably failed to deliver. If they can work out how to get this working on a mobile device, then say hello to your constant companion. I urge you to try the demo to see the future of voice AI.

Stable Audio – The people behind the leading image-generation tool Stable Diffusion have released an upgraded music-generation tool that lets you create songs up to 3 minutes long. I can fully see why artists are scared. I created this song in 30 seconds.

That’s all for this week, folks. Next week we’ll be bringing you more AI news, ideas and useful tools to support you during the AI transition. Subscribe for the latest innovations and developments with AI.

So you don’t miss a thing, follow our Instagram and X pages for more creative content and insights into our work.