The Sphere Network hasn’t run a face-to-face roundtable event since the start of the pandemic, so we were delighted to have the support of Muckle LLP, who were our gracious hosts for the evening.
Jonathan Strutt was in the chair and started the discussions by asking for a show of hands to gauge people’s current level of experience with ChatGPT. Most people in the room understood what ChatGPT was about, some had already used it, and a few were actively using it at work.
One participant mentioned that, earlier that day, he had asked ChatGPT to explain what a PowerShell script is. ChatGPT had to break the answer into blocks, but apparently it did a good job!
Before the session, Jonathan had asked ChatGPT to tell him what Generative AI is. It returned a comprehensive introduction to ChatGPT, plus an overview of the history of AI from 1950 to 2023 and the current trends for its active use in schools and workplaces.
Rather than presenting any more theory, Jonathan asked everyone to think about the key questions they wanted answered during the session, regarding both ChatGPT and other forms of Generative AI.
The questions and subsequent discussions flowed freely all evening – we only had 90 minutes in the room but we covered an awful lot of ground!
Question 1: How does AI learn?
ChatGPT learns from its data through an automated process. The answers you get depend on the licence you are using – either the free OpenAI platform or an Enterprise licence. An Enterprise licence gives users the option to keep their data private. ChatGPT’s data set is divided into five data books – two are public and three are not.
It was interesting to note that the information ChatGPT 3 uses was all scraped from the internet prior to 2021, so it uses no real-time data. This is an important point for people to understand when reviewing any output from ChatGPT 3.
One participant informed everyone that Samsung’s source code was recently uploaded into ChatGPT, so that data may now be in the public domain.
No-one in the room really knew what the other three data books are based on, so a big question remains: what is the correct version of the truth? And what are these Generative AI systems really doing with our data?
Question 2: What are your fears around Generative AI?
The next question really got the conversation going, and it was obvious that people had a lot they wanted to learn. Almost everyone in the room had some contribution to make. The themes of the discussion included:
- Data integrity: How much could/will people’s views be influenced by AI-generated content? Where is the filter – if there is one at all? Is there a generational issue around the uptake/use of AI, where the younger generation just ‘go for it’ and the older generation is more sceptical? Does AI-generated content sound right (or does it need a good edit!)? Should I question AI-generated content, or can I take what I read for granted?
- AI ethics and trustworthiness: Is there a concern about the degree of mistrust of AI-generated content? Could a robotic identity be more trustworthy? Once you can have a quasi-conversation with AI tools and add a ‘human’ voice, will that make people think that AI has a moral compass? Will we perceive AI-generated content as empathetic when our guard is down?
- Bias in AI: Many participants felt that AI was pulling bias from its Internet sources – one cited the example of a female professor who actually had to argue with an AI system that professors are not, in fact, all male! Some thought that AI is being used to create misinformation, and that it makes this type of biased activity a lot easier to generate. AI is all about the algorithm – so how do you spot a biased algorithm? The response to the data is heavily biased, and there is no real solution at the moment (see the sketch after this list for one simple check).
- Misinformation and ‘fake news’: AI can both serve and create misinformation. It will follow your argument if you say ‘I want to say the world is flat’, and find scientific references to prove that theory. The ‘theory’ can then be disseminated through channels such as Twitter, translated into multiple languages, and the misinformation spreads. No-one could explain how AI tools summarise existing reports, or how they vet their source material.
- Ownership of AI-generated content: A key question is who actually owns the results generated by ChatGPT or other forms of Generative AI. And that instigated a whole new line of discussion…
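Before we get to that, a brief aside on the bias point above. As a purely illustrative sketch (nothing shown on the night), here is one simple way that bias in a model’s outputs can be measured rather than just debated: compare positive-outcome rates across groups. The data, group labels and threshold below are all invented.

```python
# Minimal sketch of one simple bias check: compare the rate of positive
# model outcomes across two groups. All data here is invented.

def positive_rate(outcomes):
    """Fraction of predictions that are positive (1) for a group."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model predictions (1 = favourable outcome), split by group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g. one demographic group
group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # e.g. another demographic group

rate_a = positive_rate(group_a)
rate_b = positive_rate(group_b)

# A large gap in outcome rates ("demographic parity difference") suggests
# the model treats the groups very differently and is worth investigating.
gap = abs(rate_a - rate_b)
print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}, gap: {gap:.2f}")
if gap > 0.2:   # threshold is arbitrary, for illustration only
    print("Warning: outcome rates differ substantially between groups.")
```

Real bias audits go much further (error rates, calibration, and the training data itself), but even a check this crude shows that bias can be quantified, not just argued about.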
Question 3: Should we regulate AI?
We were being hosted by a law firm, so this was an interesting question!
The starting point of the discussion was to take us back to the start of the Internet ‘revolution’ and to think about what regulatory framework we could or would have put around the Internet before it was launched.
How do you put guardrails around technological advancements like this? How do you go about regulating something that people are already doing and is already pretty mainstream?
Can we treat the adoption of AI more like cyber security so there are some global standards and rules, but the regulation is different across different geographical locations? Standards in the UK will be different to those in the US but you are still relying on data that is ‘free to view’ anywhere on the Internet.
Question 4: Is there an opportunity to audit AI?
The conversation about regulation of AI was a little non-committal, so we moved on to discuss whether there was potential to audit AI-generated content.
One participant commented that, if there is no precedent, the data is used as the truth. For example, the data used to develop the algorithms behind decision-making in autonomous cars comes from insurance companies. ‘Hit a person rather than a tree.’ ‘Speed up to kill someone rather than slow down.’ On the surface these decisions seem crazy, but if the data is generated by insurance companies, it’s ‘cheaper’ to hit a person and not write off a car than it is to hit a tree and have the car damaged beyond repair. It’s ‘cheaper’ for someone to be killed than to live with life-changing injuries for the rest of their life. Deep breaths all round, and definitely food for thought.
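To see why a purely cost-driven objective produces these ‘crazy’ choices, here is a deliberately simplified, hypothetical sketch. The scenarios and cost figures are invented; the point is only that an optimiser given insurance-style costs ranks outcomes by money, not by harm.

```python
# Deliberately simplified sketch: a decision rule that minimises insurer
# cost. Scenarios and figures are invented for illustration; the point is
# that "cheapest" and "least harmful" are not the same ranking.

scenarios = {
    "swerve into tree": {"vehicle_damage": 45_000, "injury_payout": 0},
    "hit pedestrian":   {"vehicle_damage": 5_000,  "injury_payout": 30_000},
}

def insurer_cost(outcome):
    return outcome["vehicle_damage"] + outcome["injury_payout"]

# A naive optimiser simply picks the cheapest option...
choice = min(scenarios, key=lambda name: insurer_cost(scenarios[name]))
print(f"Cost-minimising choice: {choice}")
# ...which here is "hit pedestrian", even though it is obviously the worse
# decision. The objective function, not the algorithm, is the problem.
```

In other words, an audit of such a system would need to examine the objective and the data feeding it, not just the code.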
So if AI is generating content that makes ‘bad’ decisions, how do we audit it? Can you use case studies, and if you do, what are the consequences? Can AI learn nuance from data and filter its decisions so that it makes more ‘caring’ choices?
AI takes a lot of what people do out of the process, so how do you take human decision-making and apply it to AI? Is it possible for people to evaluate why AI has made the decision it has, and will this become the role of humans going forward? Do we need to be able to test AI models and check what is happening, to put the humanity back into them?
The Predictive Internet is developing at pace, and there is an emerging need for Prompt Engineers to ask the correct questions at the right time and to validate the assumptions that are drawn by Generative AI tools.
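As a flavour of what that validation might look like in practice, here is a minimal sketch using OpenAI’s Python client. The model name and prompts are illustrative assumptions, and it assumes an OPENAI_API_KEY environment variable is set; the pattern to note is simply asking the model to state its assumptions so a human can check them.

```python
# Minimal prompt-engineering sketch using OpenAI's Python client (openai>=1.0).
# Assumes the OPENAI_API_KEY environment variable is set; the model name
# and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; swap in whatever is available
    messages=[
        # The system prompt steers the model towards checkable output.
        {"role": "system",
         "content": "Answer concisely. List every assumption you make "
                    "under a heading 'Assumptions' so a human can verify them."},
        {"role": "user",
         "content": "Summarise the main risks of using generative AI at work."},
    ],
)

print(response.choices[0].message.content)
```

A human still has to read the stated assumptions and decide whether they hold – which is exactly the validation role described above.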
In certain sectors, AI will fuel a boom in jobs – just like computers and the Internet have. AI needs to consider ethics, and it’s the humans that will provide that.
Question 5: What are the opportunities for business in the North East and how do we create more opportunities?
Towards the end of the session, the conversation free-flowed, with some really interesting points being raised about the opportunities of AI:
- In the legal world, AI could be asked to write a confidentiality agreement, but it wouldn’t understand the human intuition and nuance required in a contract or dispute. Lawyers’ roles could shift to managing blockchains that produce legal documents, but there is still a need for the human conversation, and that adds a lot of value.
- The question was raised: are we in danger of creating a more divided society? Are we going to lose many jobs to automation and therefore have more people out of work, even as we often struggle to employ people for basic manual jobs, particularly in cleaning and hospitality?
- Are manual trades potentially less at risk from AI than knowledge-based workers?
- The academic world is excited about how AI is changing the way they teach and how they assess learning. Educators are being forced to teach differently and create roles that use people’s creative minds rather than their ‘AI’ minds. Lecturers have changed the way they interact with students and are looking at ways to add value with more face-to-face sessions.
- The Ambient Kitchen was created at Newcastle University back in 2010. Now someone with poor memory or eyesight can ask ChatGPT how to operate in this environment, which has to be a good thing.
- There are lots of positives to the use of AI, so we need to concentrate on these rather than just the negative aspects of the technology – although we need to be aware of the dangers too. Search technology isn’t very old – it appeared on the ‘scene’ and we didn’t really get a choice about using it. We now have no choice with Generative AI technologies either – they are here, so we need to understand how to use them.
- What are we not looking out for? How do we spot echo chambers of advertising, and pictures and videos that are false? How do we get the energy to keep validating this information? How do we know who or what to trust? Is there an opportunity to fill this gap?
- Automation can help people to work on more interesting jobs, e.g. upselling and cross-selling at the end of a call sequence to improve customer retention, so there is a lot of value in using AI in the correct way – to enhance rather than replace jobs.
- In the recruitment world, LinkedIn has helped to give great customer service and has generated a much larger pool of talent for recruitment companies. People like talking to people, and that won’t change.
- AI and healthcare is a big use case – prevention is the key, so you are able to identify potential problems before they happen. It is fantastic when AI can warn you about something that hasn’t happened yet, but it’s all about trust. And would you trust an AI system that is fed data from a health insurance company offering ‘Pay as you live’ health insurance?!
Question 6: Where will ChatGPT be in 2 years’ time?
The last 30 seconds of the session gave us two key themes to think about going forward.
- Generative AI will be ubiquitous – people just won’t think about using it in a few months’ time, especially as the number of easy-to-use tools grows and standard software packages roll it out.
- AI will be big in health care – for example, with applications for mental health that might help to predict episodes before they happen.
So, a whistle-stop tour of ChatGPT and Generative AI – and lots of food for thought for the subject matter of future Sphere events!