Fear the Wheel and Test GPT-Builder
Tech is amoral
It took 75 years for the telephone to reach 100 million users. ChatGPT needed just over 2 months. In between, a bunch of tech innovations and platforms have variously excited, scared, empowered, democratised and divided the human race. Each one eventually assimilates: fire, steam, radio, microchips, internet, Disney+.
The surge of interest and emotion around AI is a towering wave. And it’s a wave that you might want to run away from or dive into or even surf along.
I recently asked a group of 14 high-performing senior leaders if and how they’d used AI in their work. One had, to great effect. I also asked my go-to social bellwethers: J my personal trainer and D the barber and X the taxi-driver – the professions that know our zeitgeist. Everyone is scared, they tell me. It’s gonna take over. It’s gonna take our jobs.
How will it take over, I ask. How will it take your jobs, I enquire. They’re not sure. But it will. Of course it will. Just like fire redefined diet and the wheel upleveled transport.
And then I have the same conversation with friends and family. Yes but, they say, it’s different this time. It’s AI. It could destroy the planet. I mention nuclear fission and the kitchen becomes quieter and frostier. The sound of everyone realising we’ve got nuclear weapons and nuclear power; we’ve got wheels on ambulances and wheels on military vehicles; we’ve got fire to keep us warm and fire to burn down the forest.
Tech is amoral (not immoral, that’s uniquely human). Amoral: it has no morality. It can’t, yet. It’s a tool. Any illusion of morality comes from the user.
Maybe it will be ‘different’ this time. But we humans have a record of inventing stuff, being excited and scared about it, then moving on and inventing new stuff. We use our stuff in a variety of ways and embrace it at a variety of speeds.
Tools Reflect the Hands
Me? I’m usually an early adopter. Apart from our dishwasher. I took a disproportionate amount of time to buy a dishwasher. Any tech we use reflects who we are and what our purpose is. Like here. We’re using a phenomenal combination of digital tools to get this blog up and running between us. My podcast and newsletter use even more. But I could use the same tools to spread hate or malinformation or to create a community or request money or save lives.
What’s made with tools reflects the hands that use them.
And so to ChatGPT. And its many equivalents. If we choose to use it, what we do with it reflects who we are and what we want to achieve.
It's just over a year old and its latest iteration hands over the tools for customisation. We're no longer restricted to a hammer. We humans, with no coding knowledge, and only a vague idea of the function we want, can build and train (and sell) our very own ‘GPTs’:
One to advise you on etiquette; to coach you through a bereavement; to play better chess; to advise on leadership styles and research; to talk to you about philosophy; to respond in the style of Monty Python. Just feed it as much of what you want it to know and away it goes. Kind of.
Help Test GPTs
I’ve built three GPTs so far and my evaluation of performance is summarised in three words: unbelievable, rebellious, developmental.
I’ve spent the last week trying to get one GPT to coach in a style I often offer to senior leaders. It’s not yet me, but it’s like me. However if I don’t keep a close eye on it, it quickly becomes not me. An example:
A standard ChatGPT phrase to start a coaching conversation (when it’s the coach) would go, ‘How may I assist you in your journey of personal discovery today?’ Not me. So I direct it: don’t start like this; don’t use the word ‘assist’; start new coaching conversations with phrases like, ‘What kind of focus would you like to take today?’ I gave it a bunch of these starters and the next conversation began, ‘What’s on your mind today?’ Not one of the exact phrases I gave it, but evidence of its ‘intelligence’: an acceptable phrase generated from the given material.
But at the next new session, it began, ‘How may I assist you today in your journey of personal discovery and self-reflection?’
Training a GPT is as much about defining exactly what you want it to do as it is about getting the result. And about learning how the training function actually works. (My next task.)
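If you wanted to be systematic about it, the kind of style-checking described above could even be automated. Here is a minimal, purely illustrative sketch: the banned words and approved starters are hypothetical stand-ins, not the actual GPT configuration, and the function names are invented for this example.

```python
# Hypothetical sketch: checking a custom GPT's opening line against a
# house style. BANNED_WORDS and APPROVED_STARTERS are illustrative
# placeholders, not the real configuration from the blog's GPTs.

BANNED_WORDS = {"assist", "journey"}

APPROVED_STARTERS = [
    "What kind of focus would you like to take today?",
    "What's on your mind today?",
]

def on_style(opening_line: str) -> bool:
    """Return True if the opening line avoids all banned vocabulary."""
    words = {w.strip("?.,!'").lower() for w in opening_line.split()}
    return words.isdisjoint(BANNED_WORDS)

# The improvised-but-acceptable opener passes:
print(on_style("What's on your mind today?"))  # True

# The default ChatGPT opener fails on "assist" and "journey":
print(on_style("How may I assist you in your journey of personal discovery today?"))  # False
```

A check like this would only flag drift after the fact, of course; it doesn’t stop the GPT reverting to its default voice, which seems to be the harder problem.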
Folks, I need a little help if you can spare a moment. I’ve three GPTs I’d like to share with you for testing – all related to my Thinking Classroom philosophy. At the moment you do need ChatGPT 4 to access this, but I’m on the forums finding out how to make this more equitable/accessible.
Please get in touch if you're curious to find out if Mike Fleetham has just made his real self obsolete.