
AI's dirty secret, ChatGPT for brainstorming, employer AI vs employee AI, and is open-source AI like open-sourcing nuclear weapons?

AI’s dirty secret

I attended a data science and AI event last week at the London School of Economics. One of the sessions was about how AI could help accelerate tackling climate change. Every time someone uses the word ‘accelerate’ in the context of dealing with a global problem, I feel like hurling something at the screen. The assembled panel of esteemed guests were keen to pitch their respective AI-driven products/services – a tool to allow easy querying of climate policies and something to do with making it easier for companies to report on their sustainability metrics.

However, no one really talked about the dirty secret at the heart of AI (and data centres in general). I don’t mean deepfake porn or the wholesale theft of copyrighted content. No, what I’m talking about is the massive environmental cost of all those inconsequential emails that we get ChatGPT and its ilk to write. The few stats that we have are breathtaking. Google, for example, plans to build a 72-acre data centre in drought-affected Uruguay (see this excellent write-up in The Guardian) which will use the same amount of water to cool its servers as the domestic daily consumption of 55,000 people. Another example is that training GPT-3 alone consumed about 800,000 litres of water per hour – enough to meet the daily water needs of 40,000 people. It is estimated that a typical lengthy AI conversation consumes a 500ml bottle of water. As these stats only reflect data from 2023, one can only assume the real picture is far worse, especially if we factor in the release of bigger and better large language models (LLMs) such as GPT-4, Gemini, Claude 3 and the mushrooming of open-source models such as Llama 3. Combine this with the general explosion in AI usage, and an AI-first culture, and you have huge insatiable demand for and consumption of increasingly scarce water and energy resources.

The counterargument is that the process of training AI models will become more energy-efficient, and some clever widget will be invented which will… blah blah blah. The reality is that in the current land grab for AI dollars, no one cares about energy efficiency or saving the planet, and AI developers are not incentivised to reveal their dirty secrets.

So what can we do about it? Let's be frank, the average person only cares about environmental impact when it affects their ability to Netflix 'n' chill or get ChatGPT to write their reports. However, as consumers we have a responsibility to pressure Big Tech to publish data on their carbon footprint and water and energy consumption. This can also be used as a stick to beat them into using some of their trillions to drive renewable energy adoption and support efforts to improve water resource management.

How to use ChatGPT for brainstorming

AI tools can be great for brainstorming, but if you've noticed the ideas they give you are a bit samey, a new working paper from the Wharton School at the University of Pennsylvania could help.

The paper tested a bunch of different prompts to see if any could improve the variety of the ideas generated by an LLM.

They used techniques that prior research had found to be successful:

  1. Idea-prompted GPT: Include seven successful ideas from prior research as examples for the AI to use as inspiration.

  2. Threats, tips, pleading and emotional appeals: Try emotional prompts such as telling the AI it'll get fired or threatening to turn it off if the ideas are too similar. Weird, but this has been shown to help ChatGPT come up with better answers.

  3. Persona modifiers: Ask the AI to act like a widely known entrepreneur (such as Steve Jobs or Sam Altman) or to act like a (generic) "extremely creative entrepreneur" and to generate "good", "bold", and "diverse and bold" ideas.

  4. Similarity information: Give your bot five great ideas and include information about how similar they are to each other. Then, ask the chatbot to generate new ideas while considering the similarity information provided.

  5. Chain-of-thought: Use a two-stage prompt. First ask ChatGPT to generate 100 ideas, then ask it to edit those ideas to make them bold and different.
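As a rough illustration, some of the techniques above can be written as reusable prompt templates. This is a minimal Python sketch; the exact template wording and the "new product ideas" task are my own illustrative choices, not quoted from the Wharton paper:

```python
def persona_prompt(persona: str, qualifier: str) -> str:
    """Technique 3: ask the model to adopt a persona and a quality target."""
    return (f"Act as {persona}. Generate ten {qualifier} new product ideas, "
            "each described in under 50 words.")

def similarity_prompt(ideas: list[str], similarity_note: str) -> str:
    """Technique 4: supply example ideas plus information about how similar they are."""
    examples = "\n".join(f"- {idea}" for idea in ideas)
    return ("Here are five strong ideas:\n" + examples +
            f"\nSimilarity note: {similarity_note}\n"
            "Generate new ideas that are less similar to each other "
            "than the examples above.")

def chain_of_thought_prompts() -> tuple[str, str]:
    """Technique 5: two-stage prompting — generate first, then diversify."""
    stage_one = "Generate 100 new product ideas."
    stage_two = ("Now edit the ideas you just generated to make them "
                 "bold and different from one another.")
    return stage_one, stage_two
```

Each template would then be sent as the user message in a chat with your preferred model; the two chain-of-thought prompts go in successive turns of the same conversation so the second stage can edit the first stage's output.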

What I’m finding more useful is having conversations with AI, rather than bombarding it with questions. Just ask your favourite AI to do something, see what it does, and ask it to modify its response on that basis. This is often the easiest way to very quickly learn what an AI is capable of.

Companies are already using Sora to create amazing videos

OpenAI have started granting Sora access to a select group of designers and film-makers. One showcase video, created by a film-maker to demonstrate what can be done with Sora, is seriously impressive. Once Sora and similar tools are widely available, expect to see a deluge of GenAI video, especially for marketing and music videos.

AI has already started to be used to fill in background scenes (RIP compositors) and there was news last week that Hollywood's biggest talent agency, Creative Artists Agency (CAA) – which represents the likes of Scarlett Johansson, Brad Pitt and Margot Robbie – is already preparing to ‘clone’ and license their talent.

In a Los Angeles warehouse, CAA has partnered with AI companies to start scanning the body, face and voice of their stars to capture their persona. The plan is to allow these AI replicas to be used to reshoot scenes, speak their lines in a different language or superimpose their face on a stunt double.

The relentless march towards AI film generation continues and we should expect to see feature films created by prompt engineers within 18 months. But will they be any good? Probably not. If you think back to the films that have had the biggest impact on your life, they are probably tear-jerkers that centre on simple tales of humans doing human things. AI systems, with their access to a vast repository of data on what makes us laugh, cry or scream, will soon be able to feed our insatiable desire for stuff to watch. However, rewatching a movie classic over the weekend reminded me that it takes an army of skilled, passionate and crazy creatives to strike movie gold.

Recruiter AI and candidate AI duel it out on the jobs battlefield

Finding good people and good jobs is a perennial pain point for employers and employees alike. AI is only making this worse. Jobseekers are turning to AI to sniff out openings, customise and auto-submit their applications, and even train for interviews. On the other side of the fence, employers are using AI to sift through candidates, streamline rejections and the recruitment workflow, evaluate applicants and even conduct interviews.

As early as 2020, more than 90% of employers were using software (including, but not limited to, AI) to initially rank or filter mid- and high-skills candidates, according to a survey of more than 2,000 executives. About a quarter of HR departments are leveraging AI, mainly for talent scouting; that number is only expected to grow and become the norm rather than the exception.

My issue with this is that it is not fundamentally making the hiring or job-seeking process any better. Employers are forced to use tech to sift through the applicants who mindlessly apply to anything, and great candidates struggle to get seen and often end up in the 'computer says no' pile because they don’t stuff their CVs with enough keywords or seemingly fail to meet must-have requirements.

I blame many of these woes on LinkedIn which, like it or not, owns the recruitment game. AI-driven recruitment does a disservice to both employers and jobseekers and generally leads to frustration all round. For me, it is yet another example of tech falsely riding to the rescue of a broken process.

Is this the future of combat?

No, this is not a belated April Fool's joke. A US company recently released the Thermonator – a flamethrowing robodog that can be yours for just $9,420!

Footage of the fiery fido posted on the company’s website shows it menacingly prowling around and leaping across a variety of terrain, from woodland to snow, while incinerating anything in its path.

According to the site, it weighs in at 2st 9lb, occupies less than a square foot, has a battery life of up to an hour and can be controlled remotely via Wi-Fi or Bluetooth. A first-person video feed also enables users to sit back and enjoy the roasting of any poor soul that strays across its path!

While this may well be a marketing gimmick, it does raise the wider issue of increasing AI adoption by the military–industrial complex. The most egregious example of this is Palmer Luckey, the goatee-sporting, Hawaiian-shirt-and-flip-flops-wearing tech ‘dude’ who invented the Oculus Rift VR headset and sold his company to Facebook (now Meta) for $2 billion.

Rather than sailing off into the sunset, Luckey has reinvented himself as an arms dealer – a bit like a more successful version of the Justin Hammer character in the Marvel movie Iron Man 2.

Palmer Luckey: Anduril Working on AI Weapons to 'Swiftly Win Any War'

The FT did a great piece on him a few weeks ago, in which he is almost lauded as a modern-day Oppenheimer for his efforts to create a new class of AI-powered killer drones. Chillingly, he says:

"The way to frame it would be that I want to give ourselves a technology that turns the world stage, as it pertains to warfare, into the United States being an adult in a room full of toddler-sized dictators." - Palmer Luckey

With countries using AI to create kill lists for unmanned drones, the weaponised AI horse has already bolted, creating an urgent need to rewrite the international rules of engagement and tackle the thorny question of who is ultimately responsible if my Thermonator goes rogue.

In other news

  • The US Congress approved a bill banning TikTok unless its Chinese owner sells the platform. Companies that rely on TikTok to reach Gen Z consumers are now assessing the impact.

  • Geoffrey Hinton, the computer scientist who is often called 'the godfather of AI', calls on regulators to ban open-source AI models. He says “open-sourcing AI is completely crazy [...] it's like open-sourcing nuclear weapons.”

  • Anthropic releases a useful library of prompts to get the most out of Claude.

Tools we're playing with this week

  • None - I took a digital detox and spent the weekend in a forest, building a wonky tree house and showing my children that fun doesn’t have to be digitally mediated. I highly encourage you to do the same.

That's all for this week. Subscribe for the latest innovations and developments in AI. So you don't miss a thing, follow our Instagram and X pages for more creative content and insights into our work and what we do.

Thanks for reading Strategic AI! Subscribe for free to receive new posts and support my work.