The Chancellor of the Exchequer, Philip Hammond, last week delivered the Budget, in which he pledged to secure Britain's position as a world leader in technology and innovation. Among the tech spending highlights he announced £75m for AI, which, although a step in the right direction, may not be sufficient, according to Darren Roos, president of the cloud division of technology company SAP.
According to the OECD Science, Technology and Industry Scoreboard, the UK accounted for just 1.9% of AI-related patent applications from 2010 to 2015, and research suggests that 70% of AI technological development is happening in Japan, Korea, Taiwan and China.
Nevertheless, the AI funding announced by the Treasury is aimed at removing barriers to AI development, increasing the number of new PhD students in the field to 200 each year and supporting tech start-ups. At present a new tech start-up is registered in the UK at a rate of one every 60 minutes, and Philip Hammond wants to halve that interval to one every 30 minutes. Additionally, a further £100m has been pledged to support 8,000 new computer science teachers, tripling the current figure to 12,000, alongside a new National Centre for Computing.
The vast investment being pledged to AI clearly demonstrates that we are now living through a digital revolution in which the simulation of human intelligence by machines is ever increasing. AI algorithms play an increasingly large role in modern society, though they are usually not labelled “AI”, from Siri and Cortana to email spam filters and social media features such as Snapchat's facial filters.
From a business perspective, AI can make decisions autonomously, without human involvement, and has huge potential to bring accuracy, efficiency, cost savings and speed to a whole range of formerly human activities, as well as to provide entirely new insights into market and customer behaviour. It is evident, then, that AI has the capability to transform businesses and the services and products they offer.
However, with technological advances and more complex AI applications such as driverless cars and even battlefield robots arises the question: where do we draw the line? Consider, for example, a self-driving car carrying a family of four on a rural lane that spots a bouncing ball ahead. As the vehicle approaches, a child runs out to retrieve the ball. Should the car risk its passengers’ lives by swerving to the side, where the edge of the road meets a steep cliff? Or should it continue on its path, ensuring its passengers’ safety at the child’s expense?
There is no doubt that AI has numerous business-related benefits, but with the likes of Stephen Hawking and Bill Gates openly expressing concerns about AI and some clear ethical considerations, where do you think we should draw the line?