Science fiction films in which man-made robots take over the world and their creators become powerless against them are classics of the genre. While in the middle of the last century the evolution of robots seemed like a distant and perhaps unattainable future, in the last 10 years intelligent robotic systems have become an integral part of our work and daily lives, from artificial intelligence assistants to navigation on our smartphones. In addition, the Swiss technology company ABB plans to open a fully robotic factory in Shanghai, China, later this year, where robots will make robots, creating the most advanced and automated “factory of the future”.
With the rapid development of robotics and automation, the ethics and safety of robots have become an increasingly pressing issue, calling for new cultural and ethical canons and legal frameworks. The most serious debates concern the use of robots in the military, in medical decision-making, in the management of social processes and in self-driving cars. In this article, however, we will focus on whether, and how seriously, an entrepreneur who is not involved in sensitive industries and wants to start automating processes in his or her company for productivity reasons should think about the ethics and safety of robots.
Rinalds Sluckis, Chairman of the Board at Digital Mind AS, comments on aspects of automation that become increasingly important as it takes over more of our daily lives.
Who is responsible for the ethical and error-free operation of the system?
There is a great deal of debate among researchers about whether a robot can have any morality at all, or whether the software developer or the user should bear that responsibility. A distinction is drawn between operational morality, where responsibility lies entirely with the robot’s creator and the user; functional morality, where the robot is able to make moral judgements without human guidance and its creator can no longer reliably predict its actions and their consequences; and full morality, where the robot is so intelligent that it can autonomously choose its actions and is fully responsible for them.
Although fully autonomous robots are planned for the future, at present they are either fully guided or only partially autonomous. Currently, we are still mostly talking about operational morality, where the responsibility lies with the system designer and the user, and bad results or malfunctioning of the system cannot be blamed on the technology. For example, if a company wants to introduce robotic process automation, professional cooperation between the software developer and the customer is essential: clearly defining which processes will be automated, whether they will involve sensitive customer information, how data protection will be ensured, and so on. In this case, the whole process will be human-led, but it must be done professionally so that the robot can do its job well. If robotic systems are given the ability to make a decision for a human, then the quality of the data, the development of accurate algorithms, and testing to avoid wrong decisions and stereotyping become critical. Establishing good communication between the technology creator and the customer is crucial to achieving an excellent result.
As Jekaterina Stuģe, CEO of Amber Beverage Group, pointed out in the discussion on robot ethics, the main added value of robots today is to free human resources from difficult and time-consuming routine tasks, allowing people to focus on higher-value work. The robots used to automate the company’s daily processes have clearly defined functions, each step is well-defined, and all the difficult development decisions are still left to humans.
Keeping systems safe is an ongoing process
Data security and availability are critical concerns. Before automating business processes, it is essential to identify potential security risks. It should be emphasised that large technology service providers are currently able to invest significantly more resources in systems and personnel, ensuring a much higher level of protection and compliance with security requirements than the average Latvian company, so it is important to trust professionals. They will help define potential security risks and develop the appropriate software to address them.
It should be remembered that automation of processes does not automatically mean higher security risks; rather, the nature of the risks changes. Moreover, process automation eliminates a number of pre-existing issues: robots perform their tasks literally and unerringly, without deviating from them on the basis of personal biases, fatigue or lack of time. Unlike humans, they work 24/7, do not take holidays, and can eliminate or minimise corruption risks.
The development of robotic process automation is not a one-off activity: technology is constantly evolving, so keeping systems safe is an ongoing process.
Are robots competitors to labour?
When it comes to process automation, the question is how ethical it is to create and deploy intelligent devices that can compete with the human mind and possibly create a huge wave of unemployment in the future. This issue has been debated intensely over the last year in particular, as companies have made rapid technological improvements to avoid direct human contact during the Covid-19 pandemic, and studies show that the lower-skilled workforce may indeed face the problem of unemployment. By the mid-2030s, it is estimated that 30% of jobs could be automated, affecting 44% of the low-skilled workforce. In the coming years, the biggest impact could be in sectors such as financial services, where algorithms can perform faster and more efficient analysis and risk assessment; in the longer term, the development of self-driving vehicles could have a very radical impact on the transport sector.

However, the latest estimates from the World Economic Forum suggest that artificial intelligence will create more jobs than it destroys: it predicts that 85 million jobs could disappear by 2025, but 97 million new positions will be created, requiring the upskilling or reskilling of workers. It should also be mentioned that in the Baltics, given the negative demographic situation, a major wave of unemployment is not expected; rather, competition for well-qualified workers will intensify.
Although up-skilling is not a legal obligation for employers, it is part of a socially responsible company’s human resources management policy to ensure professional, qualified employees and high performance. Businesses should therefore plan their automation projects well in advance, plan the retraining of employees and inform them of the coming changes. Employees who have been doing routine tasks can perhaps, with appropriate training, be given work that is more profitable and relevant to the business. Implementing automation without giving employees meaningful information and responding to their concerns may create a negative working environment in the company and be reflected in performance. Employees need to be reassured that new software is being introduced to help them, not to jeopardise their jobs. This initial clarity will help to reduce anxiety in the workplace and ethical concerns about the welfare of employees.
Business process automation is ethical when it is done to improve the working environment; it becomes unethical when it is misused. Compliance with ethical principles and safety measures is not only a moral responsibility, but also the foundation of a sustainable business and corporate reputation.