Phil Smith CBE
Interim Executive Chair of IQE PLC and former CEO and Chairman of Cisco UK and Ireland
There is a lot of noise in the news about ChatGPT, a real application of Generative AI built on Large Language Models (LLMs). This kind of technology is extremely powerful and could do amazing things for society, helping us to navigate information much more simply, to interact with intelligent machines in far more natural ways and even to build synthetic relationships.
How do we control, manage and even regulate this powerful technology, and not allow it to be driven by some of the negative and divisive business models that we see in today's social networking platforms?
Richard Sansom
East Midlands Network Director, Cadent Gas
AI, fundamentally, ought to be a force for good. Organisations that have harnessed such supreme capability must gain the trust of the customer for sustainable outcomes. For me, complete transparency on data accuracy, platform misuse, AI decision-making and data security will provide confidence and reduce any suspicion of bias for unknown advantage. To explain such complexities, efforts must be directed towards translating the risks, controls and data journey into terms the novice can understand. If not, mistrust will take hold and government regulation will follow.
Jonathan Holyoak
Global Net Zero Programme Director, Atkins
Like most people, I'm still trying to get to grips with the implications of ChatGPT. I can't help but be concerned about the impact on secondary and university education without proactive regulation of some sort. Is some form of digital watermark that makes it clear when AI has been used possible? That feels like a good start.
Dara Latinwo
Digital Transformation Consultant
This is the billion-dollar question. Recent calls for a moratorium are unlikely to be the answer given the global nature of the AI race. Rather, greater openness and auditability of this technology is a stronger way forward: it would allow deeper insight into where the risks lie and swifter resolution of flaws, because the collective expertise of the wider AI community could be harnessed.
Sion Lewis
Associate, Arup
Regulating social networking platforms is a huge challenge, and we must implement the lessons learnt. Regulating and managing powerful AI is a step up. By its very nature, AI is continually learning, and so can learn to combat the very regulations that are put in place to control it. Regulation to ensure and monitor ethical use from the outset is paramount, but it needs to be flexible enough to adapt quickly to developments in AI. Maybe part of the solution is AI helping to regulate and manage AI!?
Gonzalo Coello De Portugal
Associate – Design and Project Leadership, Arup
We are asked to trust blindly that OpenAI will develop ChatGPT responsibly, ethically, and safely. But the code is not open, and the data used to train it is not published. Although regulation will fall behind, it is still needed. The controls inside AI companies should be demonstrated transparently. This requires competent public bodies, specific policy, and ways to enforce it, which could be funded by agreeing global taxes on tech firms. In parallel, the companies should make available to third parties the software required to identify fake outputs, biased responses, or malicious use.