
How to supercharge your learning with AI, ChatGPT gets a memory, unpicking the hype around AI predicting your death, and Big Tech vs the media

Thought of the week: I’ve seen the future of learning and I love it

My wife gifted me the luxury of a solo weekend to work on my AI book. Shorn of responsibilities and parental distractions, I spent some time refreshing my sketchy knowledge of philosophy. Specifically, I wanted to understand how some of the world’s greatest philosophers would think about AI and its impact on society.

So I spent the gaps between writing hanging out with the most patient and effective teacher I’ve ever had. After hours with my ChatGPT philosophy tutor, I had an experience so profound that it made me realise the flaws of current teaching methods and how even the crude version of AI we have today can already supercharge learning. So what’s so amazing about AI learning? Well, first, let’s define the current learning journey, which is typically:

  • passive – someone ‘teaches’ you and you have limited choice over your tutor

  • generic – one-size-fits-all and there is limited scope for personalisation

  • one-off – most learning follows the familiar pattern of learn, assess and forget.

The future of AI learning is the exact opposite. It’s:

  • active and conversational – an AI tutor of your choosing (from Socrates to a realistic video avatar of your favourite celebrity)

  • customised – learning is based on how the learner learns (visual, practical, oral, etc.)

  • ongoing – learners are accompanied by a 24/7 coach with a memory, who doesn’t make you look stupid, who can gently guide your learning and who can help you apply what you’ve learned on demand.

The great news is that you can already experience a basic version of this future for free by downloading the ChatGPT mobile app (unfortunately, neither Claude nor Gemini has this option).

  • Change the voice/language in the settings.

  • Enable voice typing in the app.

  • Tap the headphones icon and speak to ChatGPT, e.g. “Starting from the Ancient Greeks, give me a chronological introduction to the world’s greatest philosophers. Start with a 10-second introduction to each philosopher, explain their three key ideas in simple terms, and tell me how these ideas apply to the modern world and to an AI-driven future.”

Then, and this is the key step, discuss what the AI has told you. This might include:

  • if it’s too complicated, woolly or highfalutin, telling it to explain it as if you’re a 10-year-old

  • checking your understanding by repeating back to the AI what you think it said

  • asking it to relate the explanation to something you already know.

What's so powerful is that it’s like having a free, personal and incredibly patient 24/7 tutor.
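If you prefer to experiment outside the app, the same tutoring pattern can be reproduced in a few lines of code. The sketch below is purely illustrative, not part of the workflow described above: it assumes the official openai Python package and a model name chosen for the example, and simply keeps the whole dialogue in the message history so you can keep questioning the ‘tutor’.

```python
# Illustrative sketch of the "AI tutor" conversation pattern via the API.
# Assumptions: the official openai Python package is installed, OPENAI_API_KEY
# is set in the environment, and "gpt-4o" is used as an example model name.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a patient philosophy tutor. Keep answers short and plain."},
    {"role": "user", "content": (
        "Starting from the Ancient Greeks, give me a chronological introduction to "
        "the world's greatest philosophers. For each one: a 10-second introduction, "
        "their three key ideas in simple terms, and how those ideas apply to an AI-driven future."
    )},
]

print("Tutor ready. Type a follow-up question, or press Enter to quit.")
while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    print(f"\nTUTOR: {answer}\n")
    messages.append({"role": "assistant", "content": answer})  # keep the whole dialogue

    follow_up = input("YOU: ").strip()  # e.g. "Explain that as if I'm a 10-year-old"
    if not follow_up:
        break
    messages.append({"role": "user", "content": follow_up})
```

The key design point is the growing `messages` list: because every exchange stays in the conversation, the follow-up questions (simplify it, check my understanding, relate it to something I know) land in context, which is exactly what makes the tutoring feel active rather than passive.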

Further ahead, learning will become hyper-realistic and immersive, recreating the past rather than merely describing it. If you’ve been watching Netflix’s 3 Body Problem, you can see how this might eventually look.

ChatGPT gets a memory

OpenAI has begun rolling out its ChatGPT memory feature, which means the bot can now carry details from your previous conversations into new ones. Why is this important? Well, it lays the foundation for what I call ‘3C AI’: an AI coach, adviser or therapist that is:

  • customised

  • context-aware and

  • conversational.

This small but significant ChatGPT upgrade means that it will remember details from past conversations, enhancing its usefulness and personalisation. No more reminding ChatGPT to use the King’s English or to avoid certain words that are tell-tale signs you haven’t edited its output. Moreover, it can remember who you are, so you won’t need to repeat how to spell your name or address when asking it to write letters for you.

The memory function is optional and can be disabled at any time. If you want to keep a chat private, there's a 'temporary chat' option that doesn't use or remember any details shared during the session. Furthermore, memories are managed independently of chat histories, meaning deleting a chat won't erase its associated memories.
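To make the idea concrete, here is a minimal, hypothetical sketch of how a memory layer can sit on top of a chat API. This is emphatically not how OpenAI implements the feature; the model name and the local JSON file are assumptions for illustration. It simply shows why remembered facts make a brand-new conversation context-aware from the first message.

```python
# Hypothetical sketch of a "memory" layer on top of a chat API.
# NOT OpenAI's actual implementation - it only illustrates carrying user facts
# across otherwise independent conversations.
import json
from pathlib import Path

from openai import OpenAI  # assumes the official openai Python package

MEMORY_FILE = Path("memory.json")  # hypothetical local store of remembered facts
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []


def remember(fact: str) -> None:
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))


def chat(user_message: str) -> str:
    # Prepend remembered facts so a fresh conversation is still context-aware.
    memory_blurb = "\n".join(f"- {fact}" for fact in load_memory())
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name, an assumption
        messages=[
            {"role": "system", "content": f"Known facts about the user:\n{memory_blurb}"},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


remember("The user writes in British English and signs letters as 'A. Smith'.")
print(chat("Draft a short thank-you letter to my publisher."))
```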

The schism in journalism

It’s been a torrid year for print journalism and the media in general. Having had their content ripped off by Big Tech, media groups have been engaged in an existential battle for survival in a world where people are increasingly seeking answers rather than stories.

In response, media giants have largely either sued or licensed their content to the likes of Microsoft and Google. A third option has been to use AI for content generation to cut costs further and help keep the lights on.

Following successive attempts by The New York Times et al. to shame Big Tech into paying up, another story broke this week of eight more daily newspapers suing OpenAI and Microsoft. The complaint accuses OpenAI and Microsoft of using millions of copyrighted articles without permission to train and feed their generative AI products, including ChatGPT and Microsoft Copilot.

News really doesn’t pay, so it feels like print journalism is in a death spiral, with content licensing and litigation looking like one last gasp before giving up the ghost.

But the battle for ‘trusted’ news is hugely important for all of us in a world being deluged by synthetic content. Quality journalism is one of the last bastions of well-researched analysis of current affairs and, though paywalled content is growing, it is read by only a tiny proportion of the global population; everyone else increasingly gets their news, if at all, from algorithmic content generators.

You can now create a video from a single image

Researchers from Microsoft have introduced VASA, a cutting-edge framework designed to create hyper-realistic talking-face videos from just a single portrait photo and a speech audio clip. VASA-1, their premier model, excels at generating videos in which the lip movements are perfectly synced with the audio and the facial expressions and head movements are lifelike, enhancing the overall perception of realism.

The VASA framework operates in real time, producing 512x512 video at up to 40 FPS with minimal latency, making it ideal for interactive applications involving virtual avatars. It leverages a face latent space to handle a wide range of facial dynamics and head movements, significantly outperforming previous methods in terms of realism and expressiveness.

Though only a research tool, it shows how quickly the tech is moving and how synthetic video will play a huge role in how we consume content in the not-too-distant future.

The truth behind the AI that can apparently predict your death

Last year, a small team of Danish researchers released a study that, on the surface, claimed to be able to predict the day someone would die. The researchers had trained a transformer model – similar in design to a large language model – on Danish national registry data. The headline was that their model was astonishingly accurate at predicting when and how people would die.

The truth behind such claims, however, is far less sensational than the headlines might suggest. The model, known as Life2vec, was developed by a team from the Technical University of Denmark and other institutions. It focuses on the predictability of human lives based on sequences of life events, much like how natural language processing predicts the next word in a sentence.

Misinterpretations began when the model’s predictive abilities were overstated in viral media coverage. In reality, Life2vec does not predict the exact day of an individual's death. Instead, it uses data from a comprehensive registry data set in Denmark, spanning years and including detailed information on life events related to health, education, occupation and more.

Furthermore, the model’s reported accuracy of 78.8% needs context. Because most people in the data set survive the four-year window, a naive model that always predicts “will survive” would also score highly. The 78.8% figure comes from a more demanding, balanced evaluation, in which the model has to discriminate between people who died and an equal number who survived.
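A quick back-of-the-envelope illustration shows why raw accuracy flatters any model on imbalanced data. The numbers below are made up for the example; they are not figures from the Life2vec paper.

```python
# Hypothetical illustration of why accuracy needs context on imbalanced data.
# All numbers are invented for the example, not taken from the Life2vec study.

total_people = 100_000
deaths = 4_000                      # assume ~4% of the cohort dies in the window
survivors = total_people - deaths

# A "model" that always predicts "will survive" is right for every survivor.
naive_accuracy = survivors / total_people
print(f"Always-predict-survive accuracy: {naive_accuracy:.1%}")  # 96.0%

# On a balanced evaluation set (equal numbers of deaths and survivors),
# the same naive rule drops to chance level.
balanced_naive_accuracy = 0.5
print(f"Naive accuracy on a balanced set: {balanced_naive_accuracy:.0%}")  # 50%

# So a model scoring ~78.8% on a balanced set is well above chance,
# even though it sounds lower than the naive 96% on the raw, imbalanced data.
```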

The internet buzz around an 'AI death calculator' is a misunderstanding and misrepresentation fuelled by sensational reporting and unauthorised claims by third parties. This case highlights the broader issue of how AI research is often sensationalised, leading to public misunderstanding about what AI can and cannot do. The researchers’ intent is to further scientific understanding and discuss the ethical implications of using AI in predictive analytics – not to create tools for predicting individual mortality.

More robots doing amazing things

While it seems that a new humanoid robot is being released every week, we've yet to see one move as quickly or with as much precision as the model just released by Chinese company Astribot. We’re getting closer to that laundry robot we all so desperately crave...

Other interesting AI news this week

Tools that we are playing with this week

ChatGPT – Yep, I said it. I’m back using the original, but mainly on my phone. Its native voice recognition is hands down the best – Gemini and Claude fail miserably at this basic killer application.

That's all for this week. Subscribe for the latest innovations and developments in AI.

So you don't miss a thing, follow our Instagram and X pages for more creative content and insights into what we do.