
krishanjethwa
6th November 2018

Future on Trial: AI and superbugs

As part of the Manchester Science Festival, a courtroom-style debate was held, with artificial intelligence and superbugs as the two topics for discussion
Photo: Blogtrepreneur @ Flickr

As part of the Manchester Science Festival, a panel of experts hosted two talks discussing two controversial topics: artificial intelligence and superbugs. A question was posed to the audience for each, and experts were ‘brought to the stand’ to justify their viewpoints.

A relatively novel way of discussing science, the format came across unexpectedly well. Each topic was framed as a question in a courtroom-style trial, with a prosecution and a defence arguing for and against it, and experts serving as the panel’s ‘key witnesses’.

The first question posed to the audience was: is artificial intelligence a threat to society?

Artificial intelligence is a type of programme that ‘learns’ to solve a problem, and is starting to be used to help fix problems in every aspect of our lives. It looks at existing data on a subject, finds trends, and uses them to predict future outcomes or suggest better ways things could be done.

Its ability to analyse large amounts of information with relative ease makes it part of a digital revolution, which is likely to change the way we live. For example, Silent Talker is a new programme that can tell whether a person is lying by analysing facial cues, and could possibly be used by the police to help convict criminals. Police forces are reluctant to implement this kind of technology because they are unsure how reliable it can be, as previous trials in selected forces saw a failure rate as high as 98%.

Another example of AI is self-driving cars. The controversy arises when a possible error could result in human death. Could the programme operating the vehicle be considered to have made the correct ‘ethical decision’? If, for example, it could have hit either a child or an elderly man, you would want it to make the same decision a human would. However, in such a situation even human drivers couldn’t possibly be expected to make the right ethical judgement, so how can a machine be expected to do any better?

All these types of AI are seen as forms of ‘weak AI’. This is because, after their initial learning, they do not learn from any new data they receive. A well-known and somewhat loved example of this is Siri, which Apple has programmed to respond to a set of predefined questions, but which does not continually learn new things.

What is far more of a threat to society is ‘strong AI’, which constantly learns as it processes new data, meaning it is constantly improving itself. The startling ability of this type of AI was seen in 2017 when two AI chatbots were left to speak to each other through Facebook. Over time, the chatbots began to develop their own language that the programmers were unable to comprehend. This is not just a threat to our freedom but a threat to mankind, as this type of AI could easily evolve beyond our control.

One of the main problems facing AI is the absence of regulation. It could be used for malicious ends, such as profiling the population for particular gains or coercing the public. Take elections: with AI already being used for micro-targeting (see Cambridge Analytica), it would be even more worrying if political AI made its way into social media. Imagine a scenario where, instead of debating with other people on the web, you are actually debating with a chatbot that is constantly analysing what you say and how you think. Because of this, it knows exactly what, how, where and when to say something in order to change your opinion.

There is also no ‘code of conduct’ for how companies should design AI in terms of ethics, so some sort of government regulation on how it should behave could reduce the threat to society. However, governments have a difficult task of deciding what constitutes the ‘misuse’ of AI — and for many, even understanding what it is in the first place.

The second question posed was whether science can protect us from superbugs. Superbugs are bacteria that cannot be killed by antibiotics, which means we have nothing to treat them with. It is predicted that, by 2050, superbugs will be the biggest cause of death in the UK, and the lack of investment into finding new antibiotics is not encouraging.

Antibiotic use is increasing exponentially as more countries develop and seek to improve the health of their populations. Therefore, health organisations are asking doctors not to overprescribe antibiotics, as they believe this causes more antibiotic-resistant bacteria to evolve, since the bacteria must adapt to the new environment.

However, the central question for debate was whether the use of antibiotics has caused the resistance, or whether antibiotic-resistant bacteria have always existed. Looking at bacteria preserved in ice for millions of years, biologists found that they also contained antibiotic-resistance genes. Another interesting point is that diseases like tonsillitis still have not become resistant to penicillin, despite being treated with it for decades, perhaps suggesting not all diseases can become resistant.

What is universally agreed upon, though, is that hygiene is extremely important in containing these diseases if they do appear. Ideally, a coordinated global response enforcing proper hygiene codes in hospitals would constrain any antibiotic-resistant bacteria that emerge, though this looks rather unlikely. Other alternatives look equally improbable due to the lack of investment in areas that could produce innovative strategies to counter superbugs. Large pharmaceutical companies aren’t willing to invest in finding new methods of treatment, as superbug infections are still rare.

Both topics are global problems and therefore require a global discussion and response.

