Technology And Public Perception Of Risk

The general public typically has a rather low appreciation of the risk introduced by technology, and an almost unshakeable faith that technology will cater for every possible scenario. Facebook and autonomous vehicles have recently proven that "unshakeable" and "low risk" do not go hand in hand with fast-moving technology. As someone who has had to painstakingly design complex systems with the capacity to kill people, and put measures in place to minimise the likelihood of that occurring, I can honestly say the process wasn't "fast moving". If you did not already know, any technology with the potential to kill must be designed to be as safe as is "practicable"... and that word is tied to a statistic most people are oblivious to: a company need only achieve a safety outcome unless doing so is too expensive... which is to say it can accept some likelihood of killing someone if eliminating it would cost too much.

"So Far As Is Reasonably Practicable" means weighing cost against safety; from this, risks and controls must reduce the likelihood of a failure that may result in death... notice that I didn't say eliminate. If I were to say that for a given design a death was "improbable", I would be saying there was literally a one in a million (10⁻⁶) to one in a billion (10⁻⁹) chance of it occurring... which effectively states that someone will still die, just less frequently. Designs can exceed this figure, but taken as a minimum it means each autonomous vehicle design will theoretically kill one (1) person for every 1 million to 1 billion emergency stops... or more often, if designs do not treat the risk as improbable, don't satisfy fail-safe criteria, or fail in a manner the design did not account for.

Technology is not a magic bullet. The quality of code varies from person to person, engineering skill varies from engineer to engineer, and to date I have not seen a single project complete its design, proceed through testing and operate without issues being identified. No matter how good a design is, if the technology is new you can guarantee that issues will arise. Look at the statistics of accidents involving autonomous vehicles (regardless of fault), relative to the number of them on the road compared to conventional vehicles; some of the accidents are just unreal (e.g. Tesla's Autopilot death). If you can't create a safe environment with non-autonomous vehicles on the road, then it isn't safer, full stop, and this is likely because autonomous vehicles behave in ways a human doesn't expect.

Consider this: if an autonomous vehicle's instrumentation fails, what happens? Does the car stop where it is? What are the costs involved in maintaining or replacing components? What if an autonomous vehicle hits you in your car, on your bike, or as a pedestrian? At what point do we start caring about technology's flaws? I'm pretty sure the outrage kicks in exactly when technology fails us and we feel the consequences, when it should be right now, because someone has already been killed.
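To put those "improbable" figures in perspective, here is a minimal sketch of the fleet-level arithmetic. The one-in-a-million and one-in-a-billion per-demand probabilities come from the discussion above; the fleet size and emergency-stop rate are purely illustrative assumptions of mine, not figures from any standard or dataset:

```python
def prob_at_least_one_failure(p_per_demand: float, demands: int) -> float:
    """Probability that at least one of `demands` independent events fails,
    given a per-demand failure probability `p_per_demand`."""
    return 1.0 - (1.0 - p_per_demand) ** demands

# Illustrative assumptions (not from the post or any standard):
fleet_size = 1_000_000          # assumed number of autonomous vehicles on the road
stops_per_vehicle_year = 500    # assumed emergency stops per vehicle per year
total_demands = fleet_size * stops_per_vehicle_year  # 500 million stops/year

# The "improbable" band discussed above: one in a million to one in a billion.
for p in (1e-6, 1e-9):
    expected_failures = p * total_demands
    p_any = prob_at_least_one_failure(p, total_demands)
    print(f"p={p:.0e}: expected fatal failures/year = {expected_failures:,.1f}, "
          f"P(at least one) = {p_any:.4f}")
```

Even at the optimistic one-in-a-billion end, a fleet this size would still see a coin-flip chance of at least one fatal failure per year; at one in a million, hundreds per year become the statistical expectation.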

Facebook, a company so large and complex, is as efficient, agile and perfect as any other large company that exists today: there are always cracks, flaws and imperfections, and expecting it to do everything perfectly is just not viable. Trusting a company as large as Facebook with your personal information and expecting it to remain safe for eternity isn't realistic, and governments can't fix something that moves as fast as a technology company; their processes and responses are too slow and involve too much compromise. The only way to ensure governments can regulate quickly enough would be to slow companies down, and to ensure regulations are written in parallel with the companies innovating in controversial ways. Any expectation that a company will do the right thing when they don't even know when or where they have done the wrong thing is an interesting concept. Ultimately the cloud is one massive target, and it only takes time for someone to find a crack they can squeeze through to access some level of information, in ways that might even appear legal, as with the Cambridge Analytica scandal. Think about your own life: do you know about every single activity going on in your phone right now? Do you know how your information is being used, or where it goes? Do you remember every term and condition you have agreed to? So as an individual, do you really expect a company that sprawls across the world to know everything that is being done and protect the information you have given them?

Technology can do a lot to help us, but it is also capable of allowing great untargeted damage on epic scales. I feel sad for those who happily smile and tell me that it won't be long before autonomous cars are everywhere and become our only mode of transportation. If only you knew what that really means: the fear I have of sitting in a car, hoping the coders and engineers did the best job possible, so that when it fails, as I can only sit by and watch, the event that plays out doesn't scar me for life or take mine in the process. Just think of every news report you've watched involving planes, because the reality isn't much different, except a plane doesn't have to worry about cars or other physical objects at 10,000 feet in the air.

I didn't have a Facebook account until recently. I don't have Facebook friends and I don't have any personal information on there; my account is only there for our business, and even that is too much for me. I don't easily trust companies who rely on large volumes of personal information, because their business is to make money off it, and that is always going to come first. I have always seen technology as something to assist me, but I am careful about which technology I use and why, and I prefer to hold onto the skills it often hopes to assume responsibility for (e.g. remember the calculator and doing maths in your head). I prefer to err on the side of caution, because honestly, once the worst outcome has happened, the only thing a technology company can offer is an apology and some money; they can't restore your life to the last known good backup...

If you are interested in finding out about our business and what we can do for you, please feel free to visit our main website or contact us. Thank you for your time and for reading our blog post; it would be great if you shared or liked our articles via one of our social media platforms with the @ActsIntuitively tag as applies.

Brent Webster
Technical Services Manager

Bunbury, WA

ActsIntuitively Website | Psychological Services Website | Shop | Digital Shop | Blog Home

Outbound Links:

  1. The Age - Nobody Knows How To Fix What Facebook Has Broken

  2. Mercury News - Tesla Autopilot Was On During Deadly Mountain View Crash

  3. ABC News - Uber Suspends Self Driving Car Tests After Fatal Crash

  4. YouTube - Dashcam Footage: Self-driving Uber Fatally Hits Pedestrian

  5. North West Telegraph - Footage Shows Terror On Board Qantas Flight From Perth, After Passengers Were Told To Prepare For Emergency Landing

  6. Safe Work Australia - How To Determine What Is Reasonably Practicable To Meet A Health And Safety Duty

  7. IEC - Functional Safety - Essential To Overall Safety

  8. RTSA - So Far As Reasonably Practicable (SFAIRP) Vs As Low As Reasonably Practicable (ALARP)

  9. Wikipedia - Functional Safety Standard IEC 61508 (Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems)

  10. Examining Accident Reports Involving Autonomous Vehicles In California

  11. National Highway Traffic Safety Administration (Select California) - US Crash Statistics Report

  12. National Highway Traffic Safety Administration - Quick Facts 2016