
Be wary of AI fools rushing in, Scarlett Johansson goes to war with OpenAI, Microsoft wants to be Big Brother, crazy things you can do with ChatGPT

Thought of the week: Fools rush in where angels fear to tread

Ilya Sutskever, one of OpenAI’s co-founders, and Jan Leike resigned on Tuesday due, one assumes, to disagreements over what they saw as the irresponsible development of AI at the company. This was big news in the AI world because:

a) the pair were leading the company’s superalignment team, which is tasked with ensuring that AI is built to serve – rather than control – humans

b) Sutskever was also among those behind the attempt to sack Sam Altman last year.

While to outside observers this might all seem like a storm in a Big Tech teacup, it points to the problems with Silicon Valley’s “move fast and break things” culture.

It's ok for companies to experiment with hardware, even massively overhyped and largely useless AI wearables like the Humane AI Pin and Rabbit R1. However, building massive AI models by ripping off the Internet and then letting them loose in the wild has huge implications for us all.

From OpenAI et al. scraping the Internet, to Microsoft thinking it’s ok to change the operating system that runs most of the world’s computers so that a built-in AI can spy on users and remember everything they’ve ever done on a device, there is a sense that serious decisions about how we live, work and play are being rushed through without anyone thinking through the implications. Below is Leike’s post on this, which is worth reading for its transparency.

OpenAI plays fast and loose with Scarlett Johansson

OpenAI made a rare PR misstep after using a voice actress to mimic Scarlett Johansson as one of the flirtatious AI voices during its spring update demo last week.

OpenAI CEO Sam Altman had reached out to Johansson months prior to license her voice for the AI project. However, the actress declined the offer.

So you can imagine how annoyed she was when OpenAI debuted a new voice called Sky which sounded eerily similar to Johansson’s performance as the AI in the film Her (which is about humanising our relationship with technology).

OpenAI has denied intentionally replicating Johansson's voice, maintaining that Sky was developed using a different actress's natural speaking voice. Despite this, Johansson released a statement expressing her annoyance and OpenAI removed the Sky voice. You can listen to her statement, read by the Sky voice, below.

Johansson is now pushing for greater transparency and advocating for stronger legal safeguards to protect individual rights as AI technology continues to advance.

The incident has ignited a broader discussion about the ethical implications of creating AI systems that closely resemble human voices and personalities. More worryingly, Big Tech seems hell-bent on selling us all the ability to have our own "her" without worrying about the massive downsides of a world in which the loneliness cure is retreating into virtual relationships and not becoming more human. They would do well to re-watch the film.

Microsoft goes Big Brother

Not to be outdone by last week's major updates from OpenAI and Google, Microsoft introduced a whirlwind of AI announcements ahead of its Build conference. Microsoft is essentially rearchitecting Windows (which powers around 75% of the world's desktop computers) to be AI-first.

So why should you care? Well, firstly because it will fundamentally change the way you interact with your PC. Microsoft wants its AI Copilot to be an ever-present assistant that you can talk to and that can do stuff on your behalf.

This is not new. Those of you with a good memory might remember Clippy (introduced by Microsoft in the 90s), an annoying and totally useless paperclip that was dubbed the world's most hated virtual assistant and that was swiftly retired.

However, Microsoft’s new Clippy has an AI brain and, critically, will be able to see and remember EVERYTHING you do locally on your PC. The new feature, called "Recall", is an AI that watches and remembers everything you’ve done on your screen, and more. It will initially ship only on a new family of AI-first PCs, but Microsoft plans to put its Copilot on all systems running Windows, whether you like it or not.
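To make the idea concrete, here is a toy sketch of the general "watch, remember, search" pattern that Recall describes. To be clear, this is not Microsoft's implementation (those internals aren't public in this form); it simply grabs the screen every so often, OCRs the visible text and stores it in a local database you can query later. It assumes Pillow, pytesseract and a local Tesseract install, and the file names and capture interval are arbitrary.

```python
# Toy sketch of a "watch, remember, search" loop. NOT Microsoft's Recall --
# just an illustration of the pattern, assuming Pillow, pytesseract and a
# local Tesseract install. File names and the capture interval are arbitrary.
import sqlite3
import time
from datetime import datetime

import pytesseract
from PIL import ImageGrab

db = sqlite3.connect("screen_memory.db")
db.execute("CREATE TABLE IF NOT EXISTS snapshots (taken_at TEXT, screen_text TEXT)")

def capture_once():
    """Grab the screen, OCR the visible text and store it with a timestamp."""
    screenshot = ImageGrab.grab()
    text = pytesseract.image_to_string(screenshot)
    db.execute("INSERT INTO snapshots VALUES (?, ?)",
               (datetime.now().isoformat(), text))
    db.commit()

def search(term):
    """Return timestamps of snapshots whose on-screen text mentioned `term`."""
    rows = db.execute("SELECT taken_at FROM snapshots WHERE screen_text LIKE ?",
                      (f"%{term}%",))
    return [row[0] for row in rows]

if __name__ == "__main__":
    for _ in range(3):           # take a few snapshots...
        capture_once()
        time.sleep(10)           # ...ten seconds apart
    print(search("invoice"))     # ...then ask "when did I last see an invoice?"
```

Even this crude version makes the trade-off obvious: the moment your screen history lives in a searchable database, the only thing standing between you and a privacy disaster is how well that database is protected.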

We have completely reimagined the entirety of the PC – from silicon to the operating system, the application layer to the cloud – with AI at the centre, marking the most significant change to the Windows platform in decades. – Yusuf Mehdi

On the plus side, this means:

  • local AI on your computer will let you just tell it to make that boring PowerPoint presentation or find that report, and the new AI-first PCs will, according to Microsoft, be 20x faster and 100x more efficient at AI workloads than traditional PCs

  • your AI will be able to watch, hear and understand what you’re doing on your computer and answer questions in real time, so no more cutting and pasting into ChatGPT

  • AI can do the routine and boring stuff like typing out emails and creating Excel spreadsheets.

The clear downsides are:

  • Privacy is toast. Though Microsoft claims the snapshots Recall takes of your activity are encrypted and stored locally, we all know that once you go online it’s like standing naked in front of a global audience of 8 billion people.

  • Here come the AI co-workers. We’re a hop, skip and a jump away from AI employees. If an AI Copilot can read an email from a client, understand the requirements, fire up MS Word and write a report, then why do you need the human?

  • Do we really want Microsoft (or any other Big Tech company) dictating what we do on our computers and by extension all our tech, from mobiles to TVs? Fundamentally, AI is fuelled by data. Extending AI to our local devices just gives Big Tech access to oodles of free data so that they can know us better than we know ourselves and sell us more stuff that we don’t need.

Interestingly, Google have pointedly avoided the “Her”-like virtual assistant path. Recognising the potential drawbacks of anthropomorphic AI, they published a paper arguing that such assistants could “redefine boundaries between ‘human’ and ‘other,’” leading to harms such as users deferring important decisions to the AI, revealing sensitive information to it, or relying on it emotionally to the point where a mistake or an inappropriate response could be disastrous.

How to recreate Pac-Man with ChatGPT in under 5 minutes

I spent a good hour yesterday playing with a really cool feature unlocked by last week’s release of GPT-4o. You can now upload a screenshot of a classic game, such as Pac-Man or Tic Tac Toe, and ChatGPT can code it for you in seconds and help you install it. If the AI doesn’t quite get it right, you can ask it to “Create a prompt to create Pac-Man”, paste the resulting prompt back in, and then download and run the code it produces. I’ve never been nor wanted to be a coder, but lo and behold, ChatGPT managed to code it for me.
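For those who prefer to script it, here's a minimal sketch of the same idea using the OpenAI Python SDK rather than the ChatGPT app (which is what I actually used). The file names and prompt wording are just placeholders, and it assumes you have an API key set in your environment.

```python
# Minimal sketch: send a game screenshot to GPT-4o and ask for runnable code.
# Uses the OpenAI Python SDK rather than the ChatGPT app described above;
# "pacman_screenshot.png" and the prompt wording are placeholders, and an
# OPENAI_API_KEY environment variable is assumed.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the screenshot so it can be sent inline as a data URL
with open("pacman_screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Here is a screenshot of a classic arcade game. "
                     "Write a single-file Python version of it that I can run locally."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)

# The reply should contain the game code; save it for review before running it
with open("generated_game_reply.txt", "w") as out:
    out.write(response.choices[0].message.content)
```

Either way, the workflow is the same: image in, working(ish) code out, with a follow-up prompt or two to fix whatever the first attempt gets wrong.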

Now, ChatGPT isn’t really inventing anything new here; it’s drawing on patterns from the trillion or so words of text and code it’s scraped from the web. But you can see where this is all going. Once AI systems are let out in the wild, whether it’s through a wearable device like your smart watch or on your phone, they will be able to gather quadrillions of data points which in turn will make them smarter and more powerful. Extend AI to the Internet of Things – embedding sensors into street lights, pavements and walls, for instance – and we do indeed live in interesting times.

What we’re reading this week

That's all for this week. Subscribe for the latest innovations and developments in AI.

So you don't miss a thing, follow our Instagram and X pages for more creative content and insights into what we do.