Ethics For Hackers: Chapter: Harms

Table of Contents

1. Harms

  • What hurts people?
  • Loss or encumbrance.
  • Body. Personal physical sovereignty
  • Property. Time. Money.
  • Enjoyment. Health. Expectation.
  • Reputation. Dignity. Privacy.
  • Mill's principles.

1.1. Harm as a central idea

To talk of ethics we must talk of harm. Harms may be done by us, or against us. It is an evasive and sometimes complicated idea. To "do no harm" is foundational in medical ethics, and yet harming tumour cells is the proper goal of effective cancer treatment. Similarly, a question we will constantly ponder in this book is whether the harms done by digital technologies (and there are many) are outweighed by their benefits.

Common sense tells us that people and animals are harmed by physical injury or disease, loss of property or emotional trauma. But there is more to consider. Harm is a mental phenomenon, experienced by living beings capable of feeling pain and suffering. The nineteenth century writer John Stuart Mill devised his original harm principle Mill59 (later refined and formalised in Bernard Gert's Gert04 account) to claim that harms are caused by one person against another. Part of Mill's principle requires intentionality toward others.

Questions:

We might ask: what if a person consents to harm? Or is drugged to feel no pain and hypnotised to temporarily believe that being harmed is desirable? Does physical harm exist despite the absence of suffering? Or intent? Where do accidents figure in this? What if a person later changes their mind? Can an inanimate thing be harmed in the radical absence of sentient, feeling individuals? If one rock on an uninhabited planet crushes another rock, is any harm done?

1.2. Harm is a real, pragmatic thing

Our reason for starting here with harm is to sidestep a deep philosophical hole we might fall into. Distractions that traditionally derail enquiries into ethics, such as Platonic notions of "The Good" and other metaphysical musings, are best left aside. As hackers we need a solid working definition of harm, and a commitment to preventing unnecessary suffering (a stance known as the Negative Utilitarian position).

In western philosophy, some familiar thinkers are John Stuart Mill, John Rawls, Bernard Gert and Peter Singer. Though they do not by any means accord, these thinkers weave a practical, humanistic account of harm, and thus a foundation for our ethics.

Why disfavour abstract, metaphysical and meta-ethical subjects? Perhaps because their pondering is a luxury for civilisations facing less immediate crises than ours.

Worrying about the fate of yet unborn space aliens, as some followers of Singer's doctrine of "effective altruism" do, or constructing an intricate computational logic of value so that ethics can be optimised by "algorithms", is not what we're doing here.

Questions:

Is there such a thing as "harm to the fabric of society"? Maybe. Can we do harm to our immortal souls? Who can tell? Might we damage the chances of future intelligent descendent species escaping the heat-death of the universe? Does it matter? Can we harm an "artificially intelligent" computer by powering it off? Who gets to say?

These are interesting subjects, make no mistake, and ones I've spent many years chewing over. But for me, after decades of this, they feel less relevant than ever. They are distractions that fail to address immediate threats to the freedom of humanity that form the main focus of this book.

Wispy thinking directs our attention away from real suffering toward ethereal wars on abstract nouns and woolly hand-waving concepts, into metaverses and conspiracy theories where invisible bogeymen and other weapons of mass distraction rule the day. If we are so distracted, that lets bad people off the hook. The justice element of ethics is usurped by academic navel-gazing.

Let's be brutally honest with ourselves: we cannot "hurt the planet". The Earth does not feel pain when we bury three billion mobile phone handsets annually. What does hurt us as humans, when we run out of rare elements, is fighting wars over mining rights. It hurts people if we deny technological and life opportunity to future generations. When heavy metals leach out of landfills, they will cause birth defects and childhood cancers. These sorts of things are a very real and immediate concern, despite our ability to kick the can of "greenhouse gas and climate change" down the road for half a century.

Whilst abstract notions of "freedom" make great philosophy books, fascists booting your door in at 3am and arresting you for your sexuality, religion, ideas or associations, all because someone wanted to make a quick buck selling your private data, or was not competent to secure a simple database, are the subject at hand here.

1.3. Harms, welfare and digital worlds

Welfare includes a person's expectation of a happy, healthy life, and that it will continue for the foreseeable future. It is the enjoyment of a reasonable interval of life with good physical health, a fair level of intellectual acuity, and emotional stability. Bernard Gert and Joel Feinberg both give interesting accounts of freedom from harm through a welfare lens with respect to what the state (and law) can reasonably provide Feinberg84, and we will now mention some of these ideas.

Fundamentally, one should have freedom from groundless anxieties, unreasonable coercion and interference, have a tolerable social or physical environment and freedom to pursue social intercourse and self improvement. We shall shortly explore each of these in brief in a more technological context.

Many would claim these are certainly not rights. At best, they are privileges of living in a civilised society. Mahatma Gandhi said, "The true measure of any society can be found in how it treats its most vulnerable members". Many privileges are obtained only by the cooperation and efforts of others. By that logic some would hold that nothing is really a right, including the right to claim you live in a civilised society. Our point here is that, natural limits notwithstanding, our welfare is afforded by others and good reciprocal, neighbourly relations. These set the moral tone of our lives, our industry, our nations and our whole world.

Ethically, the principle of welfare seems to be grounded in Kant's categorical imperative or the older Golden Rule - that we should want others to be treated as well as we are. But, according to evolutionary biologist Nichola Raihani Raihani21, cooperation stems from the genetic level. It's selected for by evolution. Cooperation helps selfish genes get ahead in the world against other species, so it can be seen (non-paradoxically) as a kind of competition. Evidence of structured social relations, collective childcare and agriculture is found in Neolithic times, and institutionally organised social welfare has existed since at least the Greek Polis.

What values hold sway today? US/UK culture has shaped, and been shaped by digital communications technology. The twenty-first century "western" position, arising principally through the industrial capitalist work ethic of Anglo-Saxon Protestantism, has become dominant. But it now seems callous, dismissive of misfortune and lacking the will to community and shared values it once championed.

Despite producing immense material wealth, our historical context seems tragic. De Tocqueville and Montesquieu both wrote volumes on our formative civic nature. In Democracy in America DeTocqueville35, de Tocqueville described that nature as cohesive, convivial and collective in the face of a frontier existence. Even then, he clearly warned that the seeds of individualism would grow into social divisions.

As evidenced by the tribulations of the 1960s, before the internet was even born, our values had fragmented into mutually antagonistic and suspicious sub-cultures. The internet allowed free, and often consequence-free, expression of that tension. Through the internet, these tensions spread far and wide, becoming a "global culture". For the most part, when we act in the world today it is through technological means which impact the lives of others. Ethics on the digital stage has thus taken on new importance, because it is inseparable from ethics in reality.

Though we think of digital networks as connecting forces, today we are experiencing disconnecting effects of mutual hostility and mistrust. Lewis Mumford, Marshall McLuhan and Neil Postman all anticipated this "reversal" of fates.

Like America itself, the once boundless frontiers of cyberspace (described in Barlow Barlow96 and Green Green02) now feel claustrophobic. Instead of a passion to explore and build new worlds together, we have all become uniquely sensitive to our perceived identity, property, encroachment on freedoms and the harms visited by others.

1.4. Externalities

An externality is harm inflicted by a system on individuals or upon the commons as a side effect of an activity. These are often related to zero-sum phenomena of the form: "Your loss is my gain".

Such harms are like "free-radicals" that have not yet bound to something. They are passed around like a hot potato until they land where they meet least resistance. These harms migrate towards the most vulnerable, those least able to defend, articulate or even be aware of them.

In complex systems the consequences are many steps removed from the actions, and may not even be understood. In a powerful talk titled "Cold Evil" in 2000 Kimbrell00, lawyer and environmentalist Andrew Kimbrell summed up:

We witness daily the way the modern corporation has become distanced in time and space from its actions. A pesticide company has moved to another country or even gone out of business by the time — years after it has abandoned its chemical plant — the local aquifer and river have become hopelessly polluted, fish and wildlife decimated, and there is a fatal cancer cluster among the families relying on the local water supply. The executives of a tyre company are thousands of miles or even a continent away and do not hear the screech of wheels and the screams as their defective tires burst and result in fatal crashes.

Digital industries are no strangers to inflicting morally harmful externalities on the world. We test systems on users, making them guinea-pigs in our UI experiments, without their consent or remuneration. We do not calculate the costs to their lives in time, frustration and lost opportunity when we force "updates" and "tweaks" on their systems. Changes that optimise systems tend to squeeze the harms to the edges, so that apparent efficiencies externalise the costs to others.

Alvin Toffler's 1970 Future Shock Toffler70 offers a criticism of Buckminster Fuller's 'optimistic' conceit of technology as something with vanishing costs and potentially infinite efficiency (what he called ephemeralisation) - arguing that the costs become psychological burdens on society.

A side effect of solutionism is that - by hiding things under ever more layers of technology and abstraction - we end up with highly brittle, fragile systems of pathological complexity. Given the increasing frequency of digital service outages, resilience has come back into mainstream focus. Neomania (progress for its own sake), as "Move fast and break things", is no longer cool. The reckless engineer is being brought to account for cavalier attitudes that have filled society with technology that Taleb likens to philistinism in his 2012 Antifragile Taleb12.

If the word "Luddite" is used as an insult to describe those who fear technology and resent progress, one must question who the real "Luddites" are. The ordinary person struggling in good faith to make their iThing work, and then rejecting it as substandard, has far less contempt for technological progress than the Silicon Valley designer who made it.

Paradoxically then, some apparent gains in technological progress are an overall loss when seen at a larger scale. For example: nations moving to "cashless" commerce imagine they are 'streamlining' payments, adding convenience and so forth; but rather, they are destroying vital forms of non-monetary wealth. A "cashless economy" erodes stability, the social resilience of physical cash, appropriate privacy and anonymity for small transactions, community bonds, liquidity, visibility of spending, people's financial management skills, and much more.

Modern paper cash is already a more or less optimal technology. Having only digital cash is a terrible and shortsighted idea, but many developers agitate toward it because they do not see the big, complex picture of real life. The externalities and harmful side effects of their narrow idea of "progress" fall on everyone but the few (mainly private banks) who obtain some "convenience".

1.5. One off, repeated and threatened harms

Harms may be distinguished by recurrence. One-off harms are unique, unrepeatable events. Crashing your car is an event. As humans we process events. We can grieve, adapt, and move forward. If a singular harm can never happen again, it's one less thing to worry about in the future.

On the other hand, if the neighbour's dog dumps on your lawn, it is likely to recur. Such harms have a different quality. Consider spamming or trolling. Although minor annoyances, they are likely ongoing: once a spammer has your email address, the problem will spread. Some action is needed to block inevitable future annoyances that will sap your resources. Greater harm often comes from worry that something bad will happen again. Paradoxically, we are often able to forgive and forget big transgressions, but obsess over small ones that persist as threats.

As a significant event, if Google Maps shut down tomorrow, it would affect many businesses and people. But life would go on. Other services would spring up. In time, hardly anyone would remember Google Maps.

But if we find a serious fault in a protocol or embedded operating system, it becomes an ongoing harm. All existing systems must be patched or replaced. Old versions will continue to operate, creating opportunities to steal personal information and compromise systems. A small harm occurs every time someone uses the system.

When Sony BMG released music CDs containing malware for Microsoft Windows operating systems they committed not just a "hack" but an act of environmental pollution. Discs containing the malware continue to circulate even today.

In threat assessment we must ask whether we are dealing with one-off or repeating problems. The response to a one-off is to move directly to recovery and healing. The response to ongoing harm is urgent root-cause analysis with change in mind. Let's take the example of biometric credentials to illustrate this vital difference.

Data leaks of biometrics are essentially one-off. Apologies make nice public relations, but when a company leaks unhashed (or reversible) data any 'assurances' that it will not happen again are meaningless, because it does not matter if it happens again. Your fingerprints will not change. The response to having your biometric data leaked is to cease relying on biometric authentication systems forever, which are now compromised for all time. You get one chance in life to keep them secret.

Losing a password is something we can quickly clean up: we set a new one. Quite possibly it will happen again, because passwords seem weaker as security devices, offering more opportunities to be leaked. By contrast, advocates of biometric authentication imagine that such credentials are "stronger" because they are immutable.

An understanding of harm can help us understand this complex fallacy in security engineering - why passwords are actually a better security instrument than biometric ID. Counterintuitively, it is the immutability of the latter that makes it weak (more harmful) in the long run. Biometrics are neither something we have nor something we know; they are something we are. They are instruments of identification and should never be used for authentication (these are not the same thing).
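
As a minimal, illustrative sketch of that asymmetry (the secrets, salts and template here are made-up assumptions, and PBKDF2 from the Python standard library stands in for a proper password KDF), a leaked password hash can be retired by choosing a new secret, whereas a leaked fingerprint template can never be reissued:

    import hashlib
    import os
    import secrets

    def hash_secret(secret: bytes, salt: bytes) -> bytes:
        # A real system would prefer a dedicated password KDF (argon2, scrypt,
        # bcrypt); PBKDF2 is used here only because it ships with Python.
        return hashlib.pbkdf2_hmac("sha256", secret, salt, 200_000)

    # A password is "something we know": after a breach we simply choose a new
    # secret and store its new salted hash. The old leak becomes worthless.
    stored_hash = hash_secret(b"correct horse battery staple", os.urandom(16))
    stored_hash = hash_secret(secrets.token_urlsafe(24).encode(), os.urandom(16))

    # A biometric is "something we are": after a breach there is nothing to
    # rotate, so every future match against this template is suspect forever.
    fingerprint_template = b"minutiae-data-that-cannot-be-reissued"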

1.6. Measuring and responding to harms

We need to balance harm with reaction. Over-reaction can be costly. Sometimes the intent of the attacker is to produce an over-reaction, which is the actual harm. Being future oriented, we weight even small but potentially ongoing harms as much more troublesome than bigger infrequent harms.

Research shows that most minor nuisance is strongly influenced by the degree of control we feel we have. If you immediately confront the neighbour about their dog spoiling your lawn, they are usually apologetic and say it won't happen again - then you can both relax. It may even improve your neighbourly relationship to have something to talk and negotiate about. Next time your dog is the culprit, it will be easier to apologise and talk about it.

But when people live in atomised, broken communities, if they never talk to one another, and only ever interact by appeal to authority like calling the police, then suffering and nuisance are much worse. A loud TV can drive someone to murder if the feelings of unresolved harm fester and escalate.

This aptly describes our neighbourly relationships on the Internet, and to some degree our real life relations as we hide behind smartphones. The original peer-to-peer structure of the network as designed by DARPA was hastily destroyed by the commercial ISPs and the music and movie business in the 1990s. We do not know each other as neighbours. We communicate through centralised intermediated nodes and are generally too frightened to directly converse. Whole cyberwars could start over an errant ping packet.

In security research, measuring risk, harm and consequences is important. Another thing that must be "measured" is response. If we over-correct against threats or actual harms too strongly, we create new problems.

Slack, also known as soft-response, acknowledges that many harms are marginal and we should build in some tolerance. Systems with slack are much more robust and fair. Good law seeks to distinguish significant harms and values proportionality, whereas 'summary justice', particularly of the automated kind, is binary and indiscriminate. Technocracies are invariably brutal and without equity. Speeding tickets are issued equally to drunks, or mothers rushing a sick child to hospital.
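
As a minimal sketch of the difference (the thresholds, decay rate and offending address below are illustrative assumptions, not recommendations), a soft response absorbs marginal transgressions, escalates proportionately, and lets old grievances fade, where a binary rule would simply block on first offence:

    from collections import defaultdict

    class SoftResponder:
        """Proportionate, forgiving response rather than binary blocking."""

        def __init__(self, warn_after=3, block_after=10, decay=1):
            self.warn_after = warn_after
            self.block_after = block_after
            self.decay = decay
            self.scores = defaultdict(int)

        def tick(self):
            # Called periodically: old grievances fade instead of piling up.
            for key in list(self.scores):
                self.scores[key] = max(0, self.scores[key] - self.decay)

        def record(self, source: str) -> str:
            self.scores[source] += 1
            n = self.scores[source]
            if n >= self.block_after:
                return "block"   # only persistent abuse meets a hard response
            if n >= self.warn_after:
                return "warn"    # proportionate, recoverable response
            return "ignore"      # marginal harm absorbed as slack

    responder = SoftResponder()
    for _ in range(4):
        action = responder.record("203.0.113.7")
    print(action)  # "warn": tolerated at first, escalated only with repetition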

When we use a word like "marginal", care is needed. Economics and politics tend to push harms to the margins. It is not ethical if, in some utilitarian equation, we can reduce overall harm by shifting suffering onto the poor, the disabled, racial minorities, the old or children. We must also be wary of variance in the weighing of perceived harms. As with determining levels of pain in medical diagnosis, gauging harm, whether observed or self-reported by the victim, can be difficult.

There are grave harms experienced by victims who never even realise they have suffered, or if they do realise will minimise the effects. Or they are just tough. In ancient Greece, Arete was a quality describing a morally strong person who does not concern themselves with the petty snipes, slings and arrows of 'their lessers'. Arete means rising above provocation and showing magnanimous excellence in the face of challenges like changing fortune, moving on, letting it go, learning, and turning misfortunes into strengths.

An equitable person today must live by modern "Arete". They are affronted a dozen times a day by impudent and inhumane systems that insult, rob or disrespect them. They could legitimately complain, or sue, but choose not to. However, the systems, "algorithms and AI" which harm living persons cannot themselves display equity. This means that in any relation between a human and a bureaucratic algorithm or "AI" there is an ethical imbalance and asymmetry.

By contrast, there are perceived harms that some people go to ridiculous legal expense and effort to address, causing much stress and lost productivity for themselves and others. Piffling distractions, and imagined offences are elevated to high drama. In relation to over-sensitive individuals this has been dubbed 'the snowflake culture', but equally it applies to businesses too ready to attack others with lawsuits and takedowns. A culture where legal friction is profitable creates perverse incentives and fuels belligerence. The ability to automate this at zero-cost, itself becomes a threat to society.

In the digital realm, the ease with which complaints, takedowns, blockages, and information requests can be automated by bots creates an imbalance of power. Bigger companies use ostensibly well-meaning legal instruments to beat and bully little-guys who lack the means to respond. Government regulation alone is therefore insufficient and actually harmful unless individuals are empowered with a greater digital strength than corporations and governments.

In assessing harm one must ask "what can reasonably be done?" There are 'things that nobody can really do anything about' without addressing the root causes. No remedy or response at the symptomatic level is worth any effort. Graffiti under the railway bridge and some level of litter in the park are part of every city. Of course, "someone could take a stand to clean up the neighbourhood", or invent ridiculously over-complex defences. But they don't, because the necessary response would be more harmful.

Consider the average server using a password authenticated secure shell. It is subject to an attempted break-in every few seconds. Imagine if your actual street was like that. Perhaps there is some alternate universe in which every network packet is policed and every threat taken seriously. Today, exposure to such quiescent hostility, to potential criminality, is considered the price of being on the Internet. It is normal wear and tear.

Similarly, nothing ruins a neighbourhood like putting CCTV on every street corner, picket and post. To live in a place where police drones buzz overhead, neighbours' camera doorbells watch you, and your every step is tracked, makes for a vile and intolerable environment. It is inarguable that in many western cities, London for example, we have already long passed the point where the response to potential criminality is itself the harm. When a small paranoid minority (and those who exploit them; politicians and beneficiaries of the security-industrial complex) are given a disproportionate voice, harm is done to the silent, more tolerant and robust majority.

It is important to remember that thresholds of action and tolerance apply dynamically. Systems tend to establish equilibrium, where the push-back and the effort are in balance. Biological systems are great at adjusting thresholds. If people were continuously active against all threats we would die of stress quickly. Colloquially, it's about "choosing your battles".

Modern technological systems are brittle because they are balanced on the brink of failure by rigidity, management ideologies, just-in-time supply lines (JIT), and 'efficiency'. No matter how precise and continuous the underlying quantitative arithmetic may be, they tend to exhibit discontinuous, binary behaviour, creating dichotomies rather than nuance and degrees. For example: a vision recognition system must ultimately classify a face or licence plate into a specific datum, present or not-present.
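
A minimal sketch of that collapse from the continuous to the binary (the scores and the 0.5 cut-off are illustrative assumptions): however smooth the underlying arithmetic, the system's output is a hard yes or no, and a tiny change near the threshold flips the outcome entirely.

    def classify_plate(match_score: float, threshold: float = 0.5) -> bool:
        # Continuous confidence in, binary decision out.
        return match_score >= threshold

    print(classify_plate(0.4999))  # False: "not present"
    print(classify_plate(0.5001))  # True: "present" - and, say, a fine is issued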

As an example, the music recording industry took file sharing as a principled 'hill to die on', to the point of ideology. It redefined the word 'piracy', bought laws, hurled abuse and disinformation, wreaked immense damage on the internet and ultimately destroyed itself.

Never during the copyright wars was there any talk of minimisation or tolerance. Every reasonable adaptation forced upon them by culture and competitors was responded to with an entitled tantrum. Meanwhile their business model was propped up by state subsidies in the form of protectionist law. Thousands of people have gone to jail or even been killed over an obsession to turn copyright infringement from a rare civil tort into a federal crime with industrialised shakedown apparatus.

Analysts later concluded that so-called 'file sharing' had no substantial impact on sales and even helped the business. Had the entertainment industry readjusted its quality, prices and delivery methods it would have survived the rise of Apple, Spotify and Netflix. Instead they dug into a trench. They played the victim, claiming that declining sales were really 'thefts', and engaged in a thirty-year-long ugly and pitiful episode.

On a positive note, this misadventure to control development and dissemination of technology for sharing music and films stimulated astounding innovation in cryptography, confidentiality, repudiation and anonymity preserving systems. Without the fragility of the music and movie industry we would not have much of the technology that keeps us secure today.

But wars have a way of scorching the earth. A subtle harm has been cultural. The idea that businesses should be engaged in technological arms races to outwit their own customers, rather than innovate value, has become normalised. What we increasingly see are knee-jerk reactions to perceived threats. We live in a problem oriented as opposed to an opportunity oriented society, where we try to 'solve' things on a technological level, and where there is always 'an app for that!'.

Compared to what technology could offer, we have grown used to a regressive, inhibitory environment where progress occurs despite, not because of, big business. Technology enables rapid, ill-considered responses to all and any apparent threat to 'the bottom line'.

Given that so many businesses are predicated on short-window opportunistic ideas, and competition is so fierce, we see effort squandered on tit-for-tat technological oneupmanship. We get defensive patent hoarding and hostile acquisitions to 'embrace, extend, and extinguish' rivals. In a culture of advanced victimology, being 'deserving of special protections from authorities' has replaced a robust, tolerant, common-sense can-do culture. The result is a polarised, vexatious and litigious environment where innovation is stifled.

1.7. Physical harm through technology

Physical harms are defined by Bernard Gert Gert04 as: pain, death, disability, disfigurement, loss of ability, freedom or pleasure. He also notes that these may be immediate or gradual, individual or collective.

There are plenty of ways to use computers to achieve those. For decades computer ethics was constrained to thinking about information hazards and maybe the indirect ways that physical harm might arise. To physically hurt another you needed to pick up a heavy computer and hit them over the head with it. At the periphery of that rarefied model of computer hacking was always the 'possibility' that one day we might connect computers with free-roaming robots or nuclear power stations, but along with that discourse came the tacit sense that nobody would be stupid enough to do such a thing.

In today's world of autonomous robotics and the Internet of Things (IoT), weaponised drones may seem the most obvious means of directly inflicting physical harm through digital technology. But insulin pumps, power grids, transport and farm machinery are all being hooked up now. Hacks where the outcome of flipping a bit entails high-speed collision, explosion, mass poisoning, or other spectacular loss of life are a modern reality.

Physical harms of a sedentary life and potential accidents due to distraction deserve some mention. Phones, tablets and desktops are frequently used for 8 or more hours per day Pew14. Effects of static posture, strained eyes, repetitive movement, and lack of healthy blood circulation all take their toll. Although modern office workers are exposed to measurably less immediate risk than historical labourers, industrial injury and exploitation of workers' health remain in new forms.

The statistics on fatalities and injuries due to distracted driving are shocking. At the time of writing we have almost 10 years of data, back to 2003. In the US alone roughly 5000 people die each year as a result of using digital technology in dangerous circumstances, which includes driving (3000), cycling (1600) and simply walking about (400) while immersed in a cellphone screen. Over a million non-fatal accidents are directly attributable to cellphone use. The figures are rising. About forty percent of Millennials believe it is "okay and safe" to use cellphones while driving.

Lastly, let's consider the physical effects of stress. Despite developers' wish that our products be intuitive, fun and easy to use, the reality is far from it. Digital services are experienced as tedious, or more often "blood-boiling" and infuriating. Poor UI design remains rampant, ranging from the simply inadequate to "dark patterns" which are deceptive, rent-seeking, and gaslight their users.

A daily eight hour dose of disabling frustration, work performance anxiety, social comparison, fear of missing out and surveillance fatigue has a terrible impact on levels of stress hormones and neurotransmitters. These effects become physically manifest through the effects of stress chemicals on our hearts, joints, blood pressure and through the secondary effects on sleep dysfunction.

1.8. Irresilience as harm

In fiction, Lasker and Parkes' 1983 Cold War epic "War Games" depicts nuclear missile systems insecurely attached to a computer network Badham83. More recently, Sam Esmail's Mr. Robot explores the collapse of society following a deliberate hack on financial systems Esmail2015.

In reality, a more likely scenario than either are food riots or civil war following the spontaneous collapse of technological systems as a result of reckless over-dependence, mandated use and near-monopoly monocultures.

Resilience engineering is a mature but barely recognised discipline within security thinking that deals with this. The key observation is that harm does not result directly from technology, but from its failure. The attendant moral question is: do we invite or contribute to harm by becoming, or encouraging others to become, dependent on precarious technologies?

Possible triggers of systemic collapse include war, disease, computer malware, intrinsic hidden faults or "logic-bombs", environmental catastrophe including a Carrington Event (electromagnetic solar storm) and threats from General Artificial Intelligences (GAI). Malicious hacker attacks are fairly low down the list of risks. Though these are triggers, it is the underlying precarious monoculture that is the avoidable fault.

Many trajectories in digital technology today are dangerous. The "official security assurances" that support them are dishonest and are primarily to underwrite growth of the digital economy. Fragility and resilience issues should be viewed as hidden harms or accumulated "risk debts" in much the same way an overweight person might view their health. A lack of visible, symptomatic harms does not mean there are no harms.

1.9. Deferred and diffused harm

Legal accounts of harm generally deal with relations between two people, who are present. Rational contemplation, "intent", or in law mens rea (the 'guilty mind'), are central concerns. We assume that a perpetrator understands the more or less immediate consequences of an act.

But in reality, most harms have many contributory factors, and happen through a chain of events. The law introduces Negligence as a key concept, which in turn invokes the issues of duty of care and predictable likelihood. Killing someone while texting and driving is more than an accident, it is at least criminal negligence in the hands of the operator. Yet when the control systems of a jet-liner fail and kill 300 people, where does responsibility lie?

Related concepts are deferral and dilution of harm, and diffusion of responsibility. What kind of harm is stealing half a penny from a million bank accounts (dilution)? Or dumping waste into a river that other polluters have also spoiled, which contributes to causing illness thirty years later (deferral)?

Many ethical issues in technology are sidestepped because we retreat in the face of complexity. Systems, and those responsible for them, are in constant churn. Systems are inter-connected, inter-dependent and coupled in labyrinthine ways. No software engineers, let alone judges, can fully understand their operation. In this sense, digital technology is a force that has gone outside of social and legal control.

For example, regarding the terrible danger of IoT: given that "we" (in the widest sense) always knew of grave possibilities but have collectively chosen to persist in a dangerous game, where does the locus of responsibility really lie? A utilitarian balance of "supposed benefits outweighing harms" seems insufficient to answer this. If humankind insists on building a death trap for its own stochastic suicide, what weight can be placed on the inevitable accident or disgruntled individual who finally presses the button?

Our point here is to reckon with the diffusing effect of technology (see Lewis Mumford Mumford34 and Andrew Kimbrell Kimbrell00). It dislocates responsibility for harm in time and space.

Indeed that is part of its attraction. One definition of technology, according to Swiss playwright Max Frisch, is:

'Technology is a way of arranging the world so that we don't have to experience it.'

We avoid the world by experiencing it as action at a distance, vicarious effect or intermediation. This feels like a kind of power, and so is seductive. It feels empowering to be absolved of responsibility, to unload the burden of care and consideration of consequence onto a machine, algorithm or system that can be blamed. Digital technology becomes a form of systematised negligence.

1.10. Harms to the emotional life of a person

Though I think that statements such as "digital technology has become essential to our lives today" are wrong, and indeed dangerous, it is a fact that for many people digital interactions have replaced human activities, as a means of communication with friends, of maintaining family life, of working, and recreation.

Good mental health requires a proper balance of stimuli and drives identified by figures like Maslow, Beers, Adler, Freud, Rogers, and Bowlby. We need a balance of regular human contact and private time alone, exercise and sleep, logical and spiritual thought patterns, arousal and relaxation and so on… congruent with the middle path of Greek virtue ethics.

Much focus has lately been given to the malicious hardware and software created to addict, captivate, control and frustrate users for profit. Companies like Facebook and Google shamelessly deploy so-called dark patterns of design; but perhaps we are looking for scapegoats for deeper problems with our relation to technology that we are reluctant to admit.

We think of digital technology as a magnifier of expression because it amplifies reach and extends persistence, via communication and memory respectively. Yet our use of digital technology actually limits our receptive and expressive modalities. Physically, it constrains us to just finger and eye movements. For many its use is so mechanical that it reduces expression to clicking 'like buttons' and scrolling while gaze is fixed on a few square inches of screen area. This is very bad for our brains. We now know that varied mental stimulation is associated with a lower risk of dementia. Though modern humans may feel "connected to the whole world", our mental landscape has never been more limited.

Constant surveillance creates a different psychological harm. It destroys our inner life, creativity and connection to others, or distorts behaviour into compliant, ritualised displays or narcissistic exhibitionism and self-obsession. These limited and scrutinised microcosms lead to poor mental health and depression.

Emotional harm is very real and recognised medically as the cause of other outcomes like shortened life, illness, and suicide. It is fair to say that creating a world in which we are increasingly dependent on digital technologies is a harm in itself. This is true regardless of the claimed merits of any technology or any 'safety features', policies, or assurances of checks and balances.

1.11. Flattened affect

One emotional harm might be that we don't have any. A person with a blunted, restricted emotional range - or, as Berne put it, "no contact with what is really going on" - acts in a machine-like way Berne76.

Humans becoming machine-like is of equal danger to machines becoming like humans, a theme explored by Nicholas Agar, who notices diminished expression in persons excessively dependent upon technology Agar10. It is a common theme in science fiction, identified with zombie-like characters who speak in monotones, have low facial expression, use cold language and move in an awkward mechanical way. Charlie Chaplin parodies the victim of industrial automation in Modern Times Chaplin36. In reality it is linked to brain injury, schizoid traits and to post traumatic stress disorders (PTSD). It often accompanies anhedonia (not getting pleasure out of normal life).

In recent years low affect has raised another interesting question in relation to so-called "AI". Chatbots using large language models have surged in capability, leading some to claim they can pass the Turing Test or even exhibit actual sentience. The reality may be that it's not machine learning systems that are getting better, but that human intellect is collapsing. A combination of declining reading, short attention spans, self-censorship in "politically correct" cultures, declining confidence in "truth", plus constant immersion in online technology is causing greatly restricted affect and emotive vocabulary.

Any systems that limit your exposure to novel stimulus and ideas, by "information bubbling", or systems that limit your ability to express, by censorship or surveillance, can lead to diminished affect. Corporate or military environments soaked in shallow euphemism and managed speech can have this effect. Indeed the intended outcome of Newspeak in Orwell's Nineteen Eighty-Four is to constrain political thought Orwell49. Henry Giroux describes the effects of one dimensional education and empty media content in creating Zombie Politics Giroux10.

There seems an obvious link between digital technology and low affect. But not enough research has been done at the time of writing (2021) to know whether flattened affect results from the physiological impact of smartphone devices (craned necks reducing cerebral blood-flow), or the content of social media, or some other symptoms related to technology addiction or anxiety. Nonetheless, it seems hard to ignore a widespread deadening of human emotional range amongst those who are heavy users of smartphones and social media.

1.12. Continuity and commitment

Happy relationships are built on stable availability. Give a child a toy and some time to get used to it and they become attached to the object. Now take it away, and the resulting distress shows that even a child understands that interruption to continuity of enjoyment is unwelcome and unfair.

As toddlers, we don't understand the seemingly random things our parents do. Life seems arbitrary. But as adults we expect to exercise informed control. Adults seek control, and often turn to technology as a way to get it. That is a big mistake unless you own and fully control a technology yourself.

Modern technology is increasingly beyond its owner's control. Governments and companies behave in strange, patronising ways towards people where technology is involved. There is a widespread, ineffable prejudice that "people are stupid" and must be told what to do around computers.

Despite decades of digital literacy education and a narrative that "technology puts you in control", a suffocating 'Mother knows best' attitude lies just below the surface. An array of apparent choice hides the reality that most of us must accept what we are given. We assume that 'experts know best'. When it comes to digital systems, it turns out they seldom do.

Digital systems vendors make arbitrary changes to the tools we need for life. We disregard their lack of adherence to advertised function, quality, and fitness for purpose in ways we would never tolerate for tangible goods. Choice based on informed consent is an illusion in a push-economy where marketing supplants information and education. The very technology that could inform us is owned and operated by those who sell it to us. Such an arrangement permits opaque power to go unexamined.

Sudden and arbitrary change to our technology, to its availability or pricing is a rampant harm. Though 'availability' is recognised as a goal in informatics, we should consider long-term stability a first class goal for civic use. For tech companies, "move fast and break things", and being first to capture a market is everything. This is incompatible with our increasing need for reliable digital systems.

We protect data by making backups, to avoid the pain of losing treasured photos or an unfinished college thesis. But less attention is paid to continued availability of services and capability. They are distant. We make them "somebody else's problem". The whole point of "The Cloud" is to cement this disconnection.

To use a computer program, whether locally or as a cloud service, is to make it a part of ourselves. What is at stake is our cognitive investment. Each relationship we build with an application or service requires a personal investment of time and learning, perhaps many thousands of hours. Unless we have some stake in its care, it can be taken from us at a whim, or perish from neglect.

There is a website called Killed by Google, a graveyard in remembrance of hundreds of once loved applications dropped by the company. Or perhaps consider the events of 2019 when much of Venezuela's creative industry ground to a halt following US president Trump's executive order 13884 forcing Adobe to disconnect the cloud services of an entire nation. At best, the loss of an entertainment app may cause us to tut and curse the vendor. At worst, a software upheaval can end a career, destroy a company or lock a person out of the economy entirely.

While preparing a retrospective for the 20th anniversary of the 9/11 World Trade Center attacks, two CNN reporters, Clare Duffy and Kerry Flynn, were frustrated to find many thousands of hours of journalistic work destroyed by the demise of Adobe Flash. The authors claim that "some of the most iconic 9/11 news coverage is lost forever". It is not the data that has been lost, but the ability to read it, like losing one's spectacles rather than a notebook of important information. Whereas the "Memory Hole" of Orwell's Nineteen Eighty-Four was the work of manipulative government, we've allowed something worse to evolve through low-commitment ephemeral systems.

Digital technology allows very fine grained and rapid control of resources, and thus creates new harms. One example is employment. Fluid 'at will' employment where people live day to day with the prospect of being fired reduces us to interchangeable cogs.

But a pendulum swings both ways. The dynamism created by technology means these same harms of low-commitment are also visited on businesses who cannot hire loyal and dedicated workers. The phenomenon of ghosting and The Great Resignation now confronts businesses. People just decide not to turn up for work one day, and don't even respond to calls. Why stay at a company when there is a better offer? Why not steal the client list and trade secrets from a company that has shown you zero loyalty? Why even bother to call to cancel a job interview if another one works out? Digital technology enables us all to be disloyal and callous through its dehumanising effects.

When people are treated like inert commodities we treat others the same way. Many romantic liaisons arranged by Tinder or similar dating apps end within minutes if, while having dinner with a prospective date, the other is still swiping at their phone in case a better offer comes up. The concept of commitment is greatly harmed, not specifically by digital technology but by its amplifying effect on the forces of:

  • reification: turning ourselves into saleable objects
  • alienation: disconnection from things that matter
  • atomisation: being separated from each other
  • intermediation: always dealing with middle-men

Our model of digital technology cultivated in Silicon Valley encourages all these things. Their business models make technology that insinuates itself in between the lives of others in a way that casualises their thoughts and actions, encouraging them to compete for attention or resources within a market of ambivalent disinterest.

The result is a churning mess of low-commitment relations. We get Uber rides to the airport that simply don't turn up. We get important medical deliveries lost in transit, and shrugging "customer service" that "can't help us". We get our accounts and services closed and locked out without explanation or recourse.

1.13. Intellectual life

Sir Tim Berners-Lee imagined the Web would expand intellectual life. What is intellectual life, and what can harm it? According to Gert, human welfare involves a right to intellectual acuity. What does that rather odd statement even mean? Horace Mann addressed this in the mid-1800s when building the American school system. "Uneducated people are incapable of participating in democracy", he said, in The Case For Public Schools in 1850 Mann50.

In short, let's call it "free access to undistorted information", to literature and science, and the opportunity to study and discuss it. It's a value echoed by voices as diverse as William F. Buckley, Noam Chomsky, John Adams, and Martin Luther King. Western liberal democracy is predicated on decently educated, well informed citizens who are permitted diverse views.

The aesthetic pseudo-democracy we have today is a result of deliberate harms done to education and information systems in pursuit of "dumbing down", a term popularised by John Taylor Gatto in his 1992 book Dumbing Us Down: The Hidden Curriculum of Compulsory Schooling Gatto92.

Imagine a "conspiracy theory" to deliberately sabotage the global education system so as to eliminate critical thinking in order to keep industrial stability. It sounds like the villain's "take over the world" speech from a James Bond movie. Yet it's exactly the plan set out by the Trilateral Commission in the 1970s report The Crisis of Democracy, a blueprint for managing the governance and long term welfare of industrial society in order to preserve the status quo Tril73.

Equally the Powell Memorandum of the same era minces no words in setting out a strategy to infiltrate the political, legal and educational system with 'enterprise values' Powell71. To be charitable, what these idealists missed was that such 'values' are paradoxically at odds with the conditions necessary to sustain Western economic prosperity. Facebook is the poster child for how profit for the few distorts the truth necessary to sustain the many.

Advocating one's ideology is not a problem in itself, and the 'elites' are as much entitled to their machinations as anyone. Powell and the authors of The Crisis of Democracy were justified in their desire to push back against leftist values taking over institutions by way of the "Long March". But they were blinded by their arrogance in imagining they understood complex systems well enough to control them by damping innovation.

Rather than increase intellectual and political activity on the right, and let their ideas win through the force of the better argument, they set out to suppress the means of disputation and attack institutions they saw as "left wing". The legacy of this is a lasting connection between the political right and anti-intellectualism, which is ironically reversed today as the right, in new populist guises, has emerged as champion of unfettered speech in the face of left anti-intellectualism.

Neither of these factions, nor their founding documents anticipated the Internet. Much of the political counter-attack against peer networking, such as piecemeal censorship of political thought within centralised social media can now be understood in light of the 'democratic overload' theory circulating amongst the 'elites' in the 1970s. Without doubt, Twitter and Facebook are speech management tools for the warehousing and nudging of public discourse, but the unresolved battle is over who gets to do the managing.

It is transforming into a different debate today which uses 'decency' as its fulcrum. The contentious ground of 'psychological harm' and 'intellectual harm' will become footballs for both sides to chase in pursuit of stymieing free speech inconvenient to their own agendas. At the time of writing the UK "Online Safety Bill" is under dissection along precisely these fault-lines.

The tragedy is, there were so many things we could have started doing between the 1970s and 2010 through the emerging Internet, for example: to curb climate change, and to plan for post-industrialisation. The hostility toward thinkers like Rachel Carson and Dana Meadows was the fruit of the anti-intellectual, anti-progressive atmosphere fomented within the Trilateral Commission and other reactionaries of that time.

They show why attacks on 'intellectual acuity', whether as affordable education, access to information or forums to debate, are always a harm. They are harms to us all whether they are perpetrated by the political right or left keen to label what they don't like as 'hate speech' or 'fake news'.

1.14. Stability and truth

To further explore Feinberg and Gert's ideas let's consider emotional abuse and consequent instability as harms. How do we as developers ameliorate or add to the problem? A topical issue is that of disinformation, or its more recent, popular term "fake news".

Truth is a valuable but fragile construct. Without venturing prematurely into epistemology we can see that provenance of sources, veracity of systems and good faith of interpretation play their expected roles in the digital world as in earlier times. Digital technology, through cryptography (signatures and hashing) and its capability for accurate, high-resolution reproduction, appears to only favour spreading of truth. Yet half-baked popular tropes like "the camera never lies", or "computers never make mistakes" have created our misplaced confidence, allowing technologies to be repurposed for spreading believable lies.
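
As a minimal sketch of that machinery (the key, the message and the use of an HMAC in place of a true public-key signature are illustrative assumptions), a hash detects any alteration of content, while a keyed attestation ties it to whoever holds the key:

    import hashlib
    import hmac

    SHARED_KEY = b"illustrative-key-only"

    def fingerprint(document: bytes) -> str:
        # Any single-bit change in the document changes this digest completely.
        return hashlib.sha256(document).hexdigest()

    def attest(document: bytes) -> str:
        # A real provenance system would use a public-key signature (e.g.
        # Ed25519); HMAC is used here so the sketch stays in the standard library.
        return hmac.new(SHARED_KEY, document, hashlib.sha256).hexdigest()

    def verify(document: bytes, tag: str) -> bool:
        return hmac.compare_digest(attest(document), tag)

    original = b"Photograph taken 2021-06-01, camera serial 1234"
    tag = attest(original)
    print(verify(original, tag))                 # True: provenance intact
    print(verify(original + b" (edited)", tag))  # False: alteration detected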

Truth also includes "emotional truths". An image of a crying, starving baby has an emotional truth to it, whether it's an authentic photograph or a faked computer generated image. Advertising, since Bernays' work on Propaganda, commonly leverages mild emotional abuse to make people feel inadequate, and then puts forward products as solutions Bernays23,Bernays27. Similar techniques are endemic in workplaces that foment insecurity to obtain effort and drive down wages. These forms of disinformation feel comfortably woven into our culture.

Of course life is a struggle and lies underwrite power. Some people are more or less credulous or emotionally unstable, while others are sceptical and confident. Emotional vulnerability may also be down to underlying mental health issues or past events. Nonetheless today, perhaps due to the sheer volume of poor quality information we are bombarded with, it seems difficult for even a vigilant, robust and critical person not to be disaffected or driven to the depths of depression by the digital environment. The neoliberal's retort that we can exercise choice is over-rated unless one is able to cut oneself off from friends and society, through whom insidious ideas replicate.

It's understandable that in times of war, psychological operations (psyops) are used to undermine enemy morale. But psyops outfits are forbidden from "domestic operations". That's because in civilian life, to seek out weakness through surveillance with the intention of leveraging it against a target is ethically reprehensible. Yet, as Shoshana Zuboff explores, digital technology has made society's cannibalising of its own self-esteem efficient and profitable as so-called "surveillance capitalism" Zuboff19. Surveillance capitalism is now a war of western society against itself.

The line between advertising, political influence and seditious propaganda is hard to see. A line is crossed where organisations or governments go out of their way to stoke fear, engage in threats, harassment, gas-lighting, shaming, withdrawing, manipulation, isolation or other acts that undermine security and stability of civic life. As an example; the "dodgy dossier" of Tony Blair's Labour government - which faked intelligence to stoke fear about imminent (45 minute) attacks on Britain and so justify a war with Iraq - was a watershed moment in the collapse of Western trust in media.

Spreading fear merely to extract money is common extortion. A great example from the 1980s is the BBC's menacing Television Licensing advertisements, threatening working class families with (ostensibly fake) "television detector vans". Similar "anti-piracy" campaigns by associations of media producers happily misrepresent legal facts, while relentless messaging to coerce the uptake of smartphones as socially "essential" leans heavily on stoking fear of "missing out" (FOMO).

Not all "psyops" are bad. Positive influence has its place. Might there be a utilitarian balance, if obtaining a deferred good? Recent examples of public health campaigns using fear and shame are interesting, because while these were widely deemed successful in the 1980s around issues like AIDS, the Covid19 messaging hit a wrong note in contemporary culture, causing perfectly intelligent people to refuse masks and vaccines to spite perceived authoritarian fear-mongering. Many devices for influence deliberately set out to provoke anxiety in individuals and instigate behavioural instability on a societal scale. Responsible hackers should be sceptical of aiding such misadventures.

Although not included in Robert Cialdini's six core techniques for social engineering, disorientation and discombobulation are listed as valuable tools in his other texts, throughout CIA psyops manuals and in many treatments of confidence trickery Cialdini84. Naomi Klein in her 2007 The Shock Doctrine examines some tactics of PR companies, governments and influence agencies used on populations to 'stir up' fear, uncertainty and doubt (FUD), creating emotional instability in order to suspend rational, measured and carefully informed deliberation Klein07. Arguably the prelude to the Brexit referendum was a textbook disorientation exercise.

The aim then is to attack confidence, in both the self and notions of "truth", either to deflate or over-inflate it, so causing people to make improper decisions. They are encouraged to save money when they should invest, or to retreat when they ought to advance. While deception is a core and ancient part of conflict, it is so often a coward's and a fool's weapon, like gas, being indiscriminate towards friend and foe.

Sowing seeds of doubt and anxiety, spreading demoralisation and destabilisation in psychological warfare requires 'useful idiots', all too easily recruited in the present Cold Information War. Some fine examples may be found within companies who believe they can "surgically target" messages without them bleeding. As we saw from half a million civilian casualties in two Iraq wars, claims of precision are always puff and bluster. Spreading disinformation or fear is invariably self-harming. It poisons our own environment and creates self-fulfilling prophecies that impact our friends.

1.15. Betrayal as emotional harm

Betrayal is a unique class of harm because it is inflicted by friends, not enemies. It is a violation of expectation, of express or implied trust, whether set out in a formal contract or held as a normative expectation.

But what is "trust"? The best definition I know comes from the US National Security Agency, quoted in Ross Anderson's Security Engineering Anderson08, as follows;

Trust is the ability to do harm.

Trusting someone is handing them the capacity to harm you. But unless you want to sit alone in your room for the rest of your life that's unavoidable. All societies run on trust. All business, politics, cooperation, adventure… requires trust. To use technology also requires trust. We put trust in systems like cryptography, autopilots, and increasingly in "AI" algorithms that help us in life. When these fail we personify errant code as "treacherous".

Lately the idea of a Zero Trust security model has become popular again. It is a good idea, but it rests on a subtle and complex technical definition. Inevitably, and sadly, a common misunderstanding of it as "institutional mutual mistrust and hostility is a good thing" has taken hold. Unless you are an expert in cryptological protocols and cyber-security it's a good idea to avoid the words "Zero Trust", and to stop others misusing them. Trust is a good thing so long as it's tempered with the principle:

Trust but verify.

Given that a certain measure of trust is essential to any activity, and that trust is a matter of perception as much as of reality, how we operate as if others can be trusted, and how we handle the inevitable betrayals, will be an important ethical topic later.
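
To make that maxim concrete, here is a minimal sketch in Python (the file name and published digest are hypothetical placeholders) of verification at its most pedestrian: we extend trust to whoever published an update, but still check that what arrived is what they claim to have sent.

  import hashlib

  # "Trust but verify" in its plainest technical form: trust the publisher of an
  # update, but confirm the bytes that arrived match the digest they published.
  # The file name and expected digest below are hypothetical placeholders.
  DOWNLOADED_FILE = "firmware-update.bin"
  EXPECTED_SHA256 = "replace-with-the-digest-published-by-the-vendor"

  def sha256_of(path):
      digest = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              digest.update(chunk)
      return digest.hexdigest()

  actual = sha256_of(DOWNLOADED_FILE)
  if actual == EXPECTED_SHA256:
      print("digest matches: the trust we extended is, this time, verified")
  else:
      print(f"digest mismatch ({actual}): do not install")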

Emotionally, betrayal is traumatic because it marks a sudden transition from a positive relationship to a very negative one. Everything we thought we knew is suddenly subject to doubt. It is a long-lasting harm because it tends to corrode trust in a general way. Betrayal by one individual, company or government makes us unlikely to trust other organisations and services, even unrelated ones, because an effect of being betrayed is to make us question our own judgement. It may be that no deception occurred and we simply misunderstood promises. Either way we lose confidence in others or in ourselves.

Betrayal seems common in modernity, not because people have become less trustworthy, but because we enter into more agreements in a complex society. It is also arguable that the penalties for treachery are lower, and that ancient obligations, like familial and patriotic ones, are no longer much valued.

Indeed a broad cynicism now pervades commercial technological society, in that betrayal is more or less expected. So-called 'contracts' in the digital realm are not worth the imaginary paper they aren't written on. Hundreds of times a day we enter into tacit 'agreements', often by default or by failing to "opt out", whose mutable terms are subject to arbitrary change or unreasonable reinterpretation. We simply expect companies to move the goalposts when we are not looking, and so 'betrayal' now really means the crossing of some threshold of abuse that was already latent.

There are also ideological betrayals. Experiencing a radical shift in a set of ideas or a group of people you once affiliated with is painful. This occurs when companies are taken over, or when online forums, political parties, or entire social networks come under new leadership with different ideas.

Many social groups are set up by sock-puppets as 'sleeper' groups for later 'flipping' into troll farms. Because we do not know the true identities of site owners, we don't know the direction they are taking a group in, and may be caught off-guard by a sudden influx of 'strangers' that causes longstanding moderates to leave.

Intelligent young people will invariably explore all kinds of stances in life, being a devoted communist one year, a fervent free-market capitalist the next, then a devotee of objectivism, post-modernism, Neo-liberalism, trans-humanism, and so on. Growing up, it's fun to experiment with wearing many hats and loafing in the garden of ideas. This can help lessen the impact when people or ideas turn out not to be what we expected. But it can also feed a culture of low commitment if we avoid hurt by never dwelling on anything. Older, principled people who feel they have life "figured out" may be more likely to experience ideological betrayal.

A common pattern of betrayal, traditionally from the arts world, occurs in software development when 'founders', whether financial backers or idea originators, bait and switch a group of idealistic developers. Personal time, thought, code and other effort is absorbed as 'sweat equity' as the group labours toward some imagined common goal.

Suddenly a usurper with a tenuous claim that it is "their project" leverages ownership of a domain, buys a trademark or patent, and hijacks the project. There is no end of "Open" projects that have been hijacked and turned into closed for-profit companies.

Though licences like the GPL allow the remaining developers to 'fork' the code, the feelings of betrayal are usually devastating to morale. After the best developers leave, the project flounders. A good recent example (in 2021) was the decline of the Freenode IRC network after it was bought as a 'digital trophy' by a politically ambitious businessman.

1.16. Fear and inhibition

What use are computers if we are scared of them? In the 1960s my mother worked at IBM in London. She was part of the cool set who took lunch on Carnaby Street in beehives and boots, proudly part of the "Information Age". For most hackers, computers are exciting in themselves, and empowering as means to know more, learn more, express more.

But according to Anna and John Grundy, authors of the 1996 book Women and Computers, my mother would have been atypical. Throughout the 70s, computing created great anxiety in business, the brunt of which was borne by women forced into technological workflows Grundy96.

The exciting side of "computerisation" was systems analysis and design for those who made a fortune transforming offices and selling the latest portable computers. The downside for millions was disruption, deskilling, retraining, devaluation and being made subservient to machinery.

People feared losing their jobs to computers. Long before "immigrant labour" became a focus, computers were an 'otherly' prelude to 90s Neo-liberal globalism. How is it that, while politicians successfully fomented racism to deflect from economic restructuring, computers escaped any real attention? How were they socially coded as benign?

I recall from the 1980s that people were still afraid of computers. Digital literacy programmes usually commenced with a condescending introduction to convince grandma that the computer would not actually explode if she hit a wrong key.

Most people's everyday understanding of computers came from space, military or science-fiction sources. Common lore held that computers would often run amok, shouting "Does not compute!" and "Destroy all humans!". Being plugged into the wall made them "power tools", quite capable of electrocuting a careless user. They spoke in flattened, sinister tones through a dead, red eye, as in Kubrick's 2001: A Space Odyssey Kubrick68, and harboured murderous desires (ibid. and Koontz's 1977 Demon Seed Cammell77).

But forty years later I am not sure convincing people that computers are "harmless" was such a good idea. Maybe superstition, fear of magic and of omniscience are deeply rooted for good reasons that evolutionary biologists would better understand. By not properly processing this fear we have acquired an oddly nonchalant relation to devices that are simultaneously tools, weapons and hazards.

It is no surprise then to see this repressed fear re-emerging in new ways. Let's consider communication. A widely held belief is that digital technology fosters communicative disinhibition. A lack of body language and other nonverbal cues, combined with a perceived disconnect from immediate consequences, supposedly makes us callous or prone to oversharing. But what we are seeing in the post-Snowden era is the opposite. People quite rightly fear technology and the consequences of interacting with it. As Justin Shafer, a dentist from Texas, USA, discovered, just visiting the wrong public website can end in a 6am "no-knock" FBI raid; in his case fifteen agents armed with assault rifles kicked in his door, shot the dog and terrorised his children.

But rather than physical safety, the focus is now mainly on social anxieties around exposure and judgement. Technology is now understood to play a part in anxiety disorders, manifest as negative behaviours like withdrawal in novel situations, suppressed communication of feelings, and avoidance of interactive situations where judgement or exposure to others' opinions might occur.

Recently researchers have been looking at what the constant presence of always-on microphones, cameras and tracking devices does to our behaviour. Marsden Marsden17 and Wong Wong17 have both written on the wider social implications, such as the agglomeration of expressed normative political opinions and the relation to extremism and populism. Armon Armon15 and Tufekci Tufekci14 have investigated the mental health impact of surveillance, and Barry has looked at the negative impact on the workplace Barry07.

Sometimes this issue is framed in terms of privacy, though I prefer to think about digital dignity. Regardless, the transformation of computer technology into something synonymous with "fear of being watched and monitored" is a terrible regression for computing. It puts us back to a pre-1960s mindset.

Tijmen Schep, who describes privacy as the 'right to be human', writes in his book Design My Privacy, a beginner's guide to ethical design for the Internet of Things Schep16, that there appear to be three broad classes of harm. These are individual self-censorship harms, societal harms caused by a brake on innovation, and broad financial harms resulting from lost economic opportunity in an inhibited society.

Punitive mass surveillance, analysis and 'nudging' by human-guided policy or semi-autonomous cybernetic governance amplifies the effects of what Elisabeth Noelle-Neumann identified as a 'Spiral of Silence' Noelle-Neumann73. Drawing on formative theories of public relations that grew out of the work of Jung, Freud and, later, Lippmann Lippmann22, the spiral theory says that people are emboldened or inhibited by how they feel their opinions tally with those of their peers.

In fact the spiral is a positive feedback loop that operates simultaneously both ways, suppressing the margins and amplifying the centre. Rather than a design for social stability, it is a practical formula for creating a sharply peaked (bi-exponential) distribution of social affiliation. In other words, the social fear created by technology drives polarisation. We end up in a valueless dynamic in which everyone, out of fear, shame, or personal advancement, is trying to appear more normal than the next person. This state, as Schelling showed, occurs once the number of segregation categories or 'pigeonholes' (allowable ways to be in society) is too small relative to the number of agents.
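
To illustrate that last point, here is a minimal Schelling-style sketch in Python. Everything in it is invented for illustration (a one-dimensional 'street', two pigeonholes, a modest tolerance threshold), but it shows the mechanism: content agents stay put while unhappy ones relocate, so even mild preferences ratchet the population into strongly clustered, polarised blocks.

  import random

  # A minimal Schelling-style model: a 1-D "street" of agents, a small number of
  # allowable categories ("pigeonholes"), and a tolerance threshold. Unhappy agents
  # move to a vacancy where they would be content; content agents never move.
  # All parameters are illustrative, not taken from Schelling's own papers.
  SIZE, N_CATEGORIES, VACANCY, TOLERANCE, RADIUS = 200, 2, 0.1, 0.35, 3

  street = [None if random.random() < VACANCY else random.randrange(N_CATEGORIES)
            for _ in range(SIZE)]

  def like_fraction(pos, category):
      """Fraction of occupied neighbours around pos that share category."""
      lo, hi = max(0, pos - RADIUS), min(SIZE, pos + RADIUS + 1)
      neighbours = [street[j] for j in range(lo, hi) if j != pos and street[j] is not None]
      return 1.0 if not neighbours else sum(c == category for c in neighbours) / len(neighbours)

  def sweep():
      """One pass over the street; returns how many unhappy agents relocated."""
      moved = 0
      for pos in random.sample(range(SIZE), SIZE):
          cat = street[pos]
          if cat is None or like_fraction(pos, cat) >= TOLERANCE:
              continue
          vacancies = [v for v in range(SIZE) if street[v] is None]
          if not vacancies:
              break
          content = [v for v in vacancies if like_fraction(v, cat) >= TOLERANCE]
          target = random.choice(content or vacancies)
          street[pos], street[target] = None, cat
          moved += 1
      return moved

  for _ in range(50):                     # iterate until nobody wants to move
      if sweep() == 0:
          break

  occupied = [(i, c) for i, c in enumerate(street) if c is not None]
  average = sum(like_fraction(i, c) for i, c in occupied) / len(occupied)
  print(f"average like-neighbour fraction after settling: {average:.2f}")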

For some, this is recognised as a fault and an unfortunate side effect of computing that should be minimised if we are to build a benevolent technological society. For others it is a feature, perhaps even the deeper purpose they see for computers: tools for command, control, monitoring and domination. As 'good' hackers our "duty to computing", as it were, must be to undermine and sabotage systems that advance fear as a property of digital technology.

1.17. Unreasonable coercion and interference

Let's disambiguate three related concepts that pertain to harm:

  • Persuasion is convincing others without hostility or threats.
  • Coercion uses threats of violence or other punishment to obtain its ends.
  • Interference is action taken to exert control without even asking for the other's consent or cooperation.

We can properly argue that persuasion can be a good thing. Perhaps you persuade me to invest in a healthy, growing business, or take a vaccine. I should thank you where the "reasonable force of the better argument" prevails and the outcome favours me or at least does me no harm.

Where does persuasion stop and coercion begin? Any kind of threat, however mild or obliquely implied, will suffice. Coercion is always "unreasonable", since by definition it eschews reason for threats. Coercion is a harm that causes individuals to act against their better judgement.

An example of coercion might be political censorship of YouTube videos via threats of 'demonetisation'. All it takes is a little nudging, some veiled threats about the "kind of thing that may be deemed inappropriate" and uploaders will self-police. Calling it a "disincentive" or any fancy play with language doesn't change things. Chinese style "social credit" systems that employ shaming or revocation of rights are coercive harms.

Indeed, any system of rewards, however benign, is ipso facto also a system of punishments and therefore dispenses harms. When we design these systems we become "lawmakers", perhaps forgetting that "The Law", viz. the State, obtains the power to fine or imprison people because it has a mandated monopoly on violence. By social contract people grant it the right to do us harm, more or less subject to Mill's principles. Nobody ever handed out that power to tech companies, not by any stretch of delegated legislation or implied consent.

By contrast, interference simply bypasses discussion. It requires power or access privilege. In the digital realm interference is rife. The question is whether it is reasonable. Many in business or government presume it is when there is some explicit or tacit up-front agreement. But most users are absolutely unaware of such "agreements" and may need to move mountains to "opt out".

If I rent an apartment the landlord has keys. He can only enter my dwelling under limited circumstances, like performing essential maintenance, and only after asking permission. The situation with regard to digital services is presently like a landlord who rifles through your possessions, tries on clothes, steals your books and records and then threatens to evict you if you lock your room.

One form of interference is software updates, cloaked as 'bug fixes' or 'security updates', pushed to devices to delete content, negatively modify device behaviour, downgrade capabilities, or install malware. These are clear cases of illegal interference.

Malicious updates have mushroomed in recent years as corporations take a cavalier and aggressive attitude towards people's ownership of their own devices. Apple's iOS 10.2.1 update secretly throttled iPhone CPU performance. Amazon famously reached into Kindles over the air and erased copies of George Orwell's Nineteen Eighty-Four. In 2018 the Asus Live Update service was compromised to push backdoored updates to connected machines.

1.18. Resource use as harm

Harms done by attackers are often invisible when they consume or change resources that users are unaware of. One argument says that if users do not even know how their computer works, or what resources they own, how can they be victims? Performance is not impacted, the argument goes, when only our surplus resources are taken. Companies that presume to run "analytics" on your computer take this line.

Victims may go for years never noticing an intrusion or resource misuse. Think of those stories where someone in an enormous house finds items re-arranged in the fridge. Turns out they co-habit with a perfectly benign lodger who hides in the attic. Is that a harm? A crime?

The first hackers took this approach of benign squatting. They were neither blackhat nor whitehat, but took Barlow's rules of cyberspace Barlow96 and The Mentor's Hacker Manifesto Blankenship86 to heart, roaming freely while exercising an implicit code of ethics not to cause damage. Before any specific cyber-law existed it was hard for prosecutors to say what exactly an intruder had 'stolen' or deprived the owner of. No law equivalent to trespass made mere presence on a system illegal. One tangible harm judges could convict on was theft of electricity. Although the crime amounted to taking only pennies' worth of energy, it was a harm that juries could understand.

We now have decades of cyberlaw on misuse and intrusion, but this old, concrete and measurable harm remains useful, though rarely invoked. In an age of worrying climate effects and e-waste, squandering energy and causing "wear and tear" also inflict a societal harm.

Energy is not the only resource harm:

Hackers who break into your device will sometimes use it as a server to host files. Without regular disk usage checks (a crude one is sketched below) you probably won't notice a few gigabytes of discrepancy. A rootkit that modifies the disk space reporting will further cover the trail.

Attackers can consume CPU cycles, which use energy and make compute power unavailable to the owner, decreasing the performance of your device.

Other resource harms incur bandwidth costs or consume credits and virtual currencies. In countries where mobile bandwidth is expensive, malware exchanging data with a server can silently eat your allowance. Other kinds of malware specifically target digital credits like Bitcoin or in-game currencies, or get your phone to call a premium-rate number which earns money for the criminal. A variation is a social engineering attack spoofing a message apparently from the police, tax agency or hospital telling you to get in touch urgently; but the reply number charges many dollars per second and puts you on hold while it eats your credits.

In these cases electricity is used. In addition, there is wear and tear, such as reduced battery life. These are measurable harms that affect your enjoyment of your property.
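
The first of these harms, hidden storage, is exactly the kind of thing a 'regular disk usage check' can surface. Below is a rough Python sketch (the mount point is an assumed example): it compares what the filesystem claims is in use against the files it can actually enumerate. A large unexplained gap may be snapshots, metadata or unreadable directories, but it is also the discrepancy a hidden file store, or a rootkit that lies about disk space, relies on nobody noticing.

  import os
  import shutil

  # Compare the usage the filesystem reports for a volume with the total size of
  # the files we can actually see under its mount point. The path is an assumed
  # example; unreadable files and directories simply widen the expected gap.
  MOUNT_POINT = "/home"

  def visible_bytes(root):
      total = 0
      for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda err: None):
          for name in filenames:
              try:
                  total += os.lstat(os.path.join(dirpath, name)).st_size
              except OSError:
                  pass
      return total

  usage = shutil.disk_usage(MOUNT_POINT)   # what the filesystem claims
  seen = visible_bytes(MOUNT_POINT)        # what we can enumerate ourselves

  print(f"reported used : {usage.used / 1e9:7.1f} GB")
  print(f"files visible : {seen / 1e9:7.1f} GB")
  print(f"unaccounted   : {(usage.used - seen) / 1e9:7.1f} GB")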

Corporate abusers use the early hackers' argument that their 'service' has a minuscule resource footprint, and that besides, they are not stealing anything because the use is temporary. Here we should consider Kant's principle of universalisability, or as your teachers would put it: "What would happen if everybody did that?".

An accumulation of tiny incursions, each harmless on its own, brings down a system. Apple installs its little 'update service', Comcast adds its little monitoring applet, Facebook runs a harmless little background app to help you "find friends", Google adds a location tracker, and so on, until your £1000 phone runs at a crawl and the battery dies in an hour.

1.19. Misdirection and time wasting

One asset you have, and which an attacker can waste, is time. Spam costs society about £1bn per year according to Brian Krebs's security blog. But theoretically anything that throttles your bandwidth, slows your system, misdirects your attention or causes you extra work in order to enrich its masters is an attack. It harms you by wasting time. This surely includes systems like reCAPTCHA that cheekily recruit your time as free labour to train image recognition systems, but also traffic shaping that violates net-neutrality regulations and means your web page loads a little slower.

A key concept for understanding several classes of harms caused by digital technology is the 'Attention Economy'. Good accounts may be found in Tim Wu's 2016 The Attention Merchants Wu16 and Jenny Odell's 2019 psychological defence guide How to Do Nothing Odell19. Corporations fiercely compete for your eyeballs, your clicks, your engagement. Instead of being about the efficient delivery of information, the web is now about distraction, delay and misdirection.

Many technologies sold as "time saving" turn out to do the opposite. For example, at a cinema with only a few yards between the ticket office and the entrance, here's how you visit as a family:

  • tell the clerk your phone number
  • wait for an SMS message to arrive for each person in your party
  • click several links to get a separate QR code for each person
  • click another link and visit a poorly designed web page to:
    • confirm you want to sit together as a group
    • blindly agree to 3 pages of terms and conditions
  • walk to the door where another person scans your "e-ticket"

A paper ticket would not only be a technologically superior solution, it would do less harm to the environment. This is certainly a failure of design. It is also a failure of culture, since people enchanted with "digital convenience" are not easily convinced that using a £1000 mobile phone and half a kilowatt-hour of energy to solve a two-penny problem isn't "smart".

A philosophically deeper understanding comes from the Marxist idea that "attention labour" is extracted from users of digital services, summarised well in Jonathan Beller's The Cinematic Mode of Production Beller06. We've all heard the maxim "You are the product". But the idea of cognitive exploitation goes further, treating your 'mind-share' as real-estate to be grabbed, traded and exploited.

Simpler terms like patriotism and brand loyalty do not quite capture the sophistication of modern forms of mind control. A poetic take on this sociopathic neediness comes from a David Bowie song describing an artist's need to "Put you all inside my show".

To this end, a generation of psychologists subverted the work of their predecessors in human-computer interaction (HCI), cognitive ergonomics and user experience (UX) to create so-called 'dark patterns'. Influence theorists like Fogg Fogg02 adapted Skinner's work on intermittent, variable-schedule rewards and reinforcements to make web applications addictive and time-consuming instead of useful.

By using interaction psychology against its ostensible purpose of making tasks and communication efficient, scientists found they were able to get people to dwell on web pages, become confused and disorientated, click impulsively, and get addicted to certain sites and content. It turns out this work, mainly for advertisers and social media companies, pays much better than using the science for good aims like informative and educational content, as Tim Berners-Lee had imagined for the 'World Wide Web'.
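
A toy Python sketch makes the mechanism plain (the numbers are invented for illustration): a fixed schedule and a variable-ratio schedule hand out roughly the same number of 'rewarding' items per thousand feed refreshes, but only the variable one makes the next reward unpredictable, and it is the unpredictability, not the payout, that keeps people pulling the feed.

  import random

  # Two reward schedules with the same average payout rate. "Fixed" rewards every
  # Nth refresh; "variable" (the slot-machine schedule) rewards each refresh with
  # probability 1/N. Illustrative numbers only.
  MEAN_RATIO = 10          # roughly one rewarding item per ten refreshes
  REFRESHES = 1000

  def fixed_schedule(i):
      return i % MEAN_RATIO == 0

  def variable_schedule(_i):
      return random.random() < 1 / MEAN_RATIO

  for name, schedule in (("fixed", fixed_schedule), ("variable", variable_schedule)):
      gaps, since_last = [], 0
      for i in range(REFRESHES):
          since_last += 1
          if schedule(i):
              gaps.append(since_last)
              since_last = 0
      if gaps:
          print(f"{name:8s}: {len(gaps):3d} rewards, gap between rewards {min(gaps)}-{max(gaps)}")
      else:
          print(f"{name:8s}: no rewards this run")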

Other manifestations of time-wasting attacks are specious legal threats, patent trolling, spurious DMCA take-down notices and other kinds of lawfare where the aim is to recruit some ostensibly benign system as a proxy attacker to tie victims up in knots. This is indeed common at quite high levels of nation-state diplomacy, among large corporations, trade organisations and even the military, who recognise lawfare as a non-linear method of 'sapping the enemy's resources'. The legal system foolishly plays along because lawsuits make money, so playing these games can seem lucrative, even though the whole caper is a broken-window fallacy, a bonfire of human resources, and ultimately harms public confidence in justice systems.

Time wasting is also used as a component in more complex attacks, as a distraction technique. For example, a spearhead using a rather unsophisticated attack like a simple DDoS is used to misdirect IT teams while a more subtle and higher-value attack takes place.

Most commonly though, misdirection simply tricks you into visiting a website, viewing content, or even buying something you did not expect. Shopping sites may employ confusing and deceptive methods to trick their users. A study published at ACM CSCW 2019 Mathur19 found around 10 percent of commercial websites using deceptive patterns, and a thriving industry of third-party designers providing 'plug-ins' to manipulate customers.

This raises the question of why you might have any expectation at all when clicking a link. Usually it is context that provides expectation. For example, a news site says 'Read more…', and we click the link expecting to find more about the current story. But the fault of the web (HTTP) as a request-based paradigm is that URLs are one-way blind actions, not verifiable in principle. Although most browsers allow you to hover over, or otherwise examine, the target of a link, few people use that capability wisely, and sites like Twitter deliberately use obfuscated shortened URLs to hide the true destination of links. Moreover, a URL that looks 'legitimate' does not preclude deception. With JavaScript activated the consequences of clicking a link are anybody's guess.
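
Examining a link's target need not rely on hovering. The small Python sketch below (the shortened URL is a hypothetical placeholder, and some shorteners refuse HEAD requests) asks a URL shortener for its redirect without following it, so the real destination is revealed before anything gets visited.

  import requests

  # Ask a shortener where a link points without following the redirect: a HEAD
  # request with redirects disabled should return only a Location header.
  # The example short link is a hypothetical placeholder.
  def reveal(short_url):
      response = requests.head(short_url, allow_redirects=False, timeout=10)
      if response.status_code in (301, 302, 303, 307, 308):
          return response.headers.get("Location", "<redirect with no Location header>")
      return f"no redirect (HTTP {response.status_code})"

  print(reveal("https://t.co/xxxxxxxx"))   # hypothetical shortened link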

What people find frustrating is that there are essentially no laws against wasting someone's time. Even with evidence that an attacker had full premeditated intent to deprive someone of time and attention, and that the victim suffered real loss as a consequence, there just isn't any code to get legal hooks into. In the UK a legal offence of wasting police time exists, while in Canada deliberately tying up an official with dishonest claims is public mischief. These offences specifically concern public officers engaged in an investigation, distracted by acts of lying, misleading and making false allegations.

Perhaps there is a need for, and interesting prospects in, creating a crime of "mind-hacking" by extending the principle on which the first hacking laws were formulated. When I literally "pay attention" I am consuming cognitive resources. In direct analogy to a computer needing electricity, I have to give biological resources, ultimately food, to the task. If you steal my time and attention you are literally stealing food. On the same basis that a "computer misuse" crime exists, an offence of "cognitive misuse" could be constructed. The ramifications would probably be ridiculous, but nonetheless I think there is a very serious claim to be made that modern advertising and site design inflicts a measurable harm. Ethical hackers should certainly never help such projects.

1.20. Denial of agency and opportunity

Selection is something all systems do, whether they're biological or digital, looking for a mate or sorting tax returns. While "discrimination" is a tainted word, it's something we all do. In choosing a restaurant wine we exercise "discriminating taste". We discriminate between food and poisons, and between people we find attractive and those we don't. Just about everything in life is a choice based on values we hold. A root of harm is where we hide those values for the sake of appearances and so become unpredictable and unreliable toward others.

To be the subject of discrimination can feel like, and sometimes is, a harm. Lost opportunity occurs when information is selectively filtered to exclude you. That party you didn't get invited to in 7th grade is just part of life. However, systematic, large-scale industrial discrimination against groups of people becomes a serious rights violation.

Charging customers less for bulk purchases or offering a student discount are widely accepted practices. Price discrimination is where a website shows available flights to customers logged in as 'premium members' but shows the same flights as "full" or otherwise unavailable to everyone else.

As companies gain too much information on individuals, finer-grained prejudicial pricing occurs. "Elastic" pricing may extend to how much the vendor believes you personally are prepared to pay, perhaps how desperate you are for life-saving medicines. Excessive information asymmetry destroys free-market economics.
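
What 'finer-grained' means is easiest to see in caricature. The Python sketch below is entirely invented (signals, weights and prices alike), but it shows the shape of the logic: the quoted price tracks the profile's inferred urgency and ability to pay rather than the cost of the good.

  # A deliberately crude caricature of per-person "elastic" pricing.
  # Every signal and weight here is invented for illustration.
  BASE_PRICE = 40.00        # what the flight or medicine would cost before profiling

  def personalised_quote(profile):
      price = BASE_PRICE
      if profile.get("device") == "latest flagship phone":
          price *= 1.15                    # inferred ability to pay
      if profile.get("searches_last_hour", 0) > 5:
          price *= 1.25                    # inferred urgency or desperation
      if profile.get("loyalty_member"):
          price *= 0.95                    # the visible, advertised 'reward'
      return round(price, 2)

  print(personalised_quote({"device": "latest flagship phone",
                            "searches_last_hour": 8,
                            "loyalty_member": True}))   # well above the base price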

A growing problem is not group discrimination, where a poor or black person might pay more for a bus to their neighbourhood, but that you specifically are denied transit because you once worked for a rival company. A surveillance society enables individual exclusion. While leading a Girl Scout outing, lawyer Kelly Conlon was blocked from entering Radio City Music Hall by a facial recognition system because she was professionally involved in a related case Helmore22.

Increasingly the online world is filled with secretive databases and algorithms that amount to blacklists. Codified non-judicial personal prejudices about individuals or groups are constructed on the basis of socio-economics, gender, political opinion, habits or affiliations. Opportunity is taken from people when their options are unfairly limited, often leading to a downward spiral of exclusion.

Aside from census-taking by governments, the origins of demographics lie in risk-sensitive businesses like insurance and credit agencies. Other than the secret police of repressive states, they were the first organisations to compile detailed dossiers on citizens with the primary aim of punitive exclusion.

We must acknowledge a moral continuum between necessary and prudent profiling and what amounts to a pre-meditation of malice. A distinction arises between whether one has grounds to discriminate and whether one has the right to. It may be illegal to discriminate according to gender or ethnicity.

While it may not be illegal to keep codified 'opinions' on other people, acting on that information may be. Google devised a system called FLoC (Federated Learning of Cohorts) to anonymise individual data about web users, missing the point that the problem lies not with privacy violation as such but in the discriminatory acts implied by so-called 'service targeting', regardless of whether that is tied to a publicly identifiable individual.

Our virtues are defined as much by the enemies we make as by the friends we have. I consider not being able to access certain sites or join certain groups to be a feature, not an inconvenience. Leaked dossiers can be valuable to their subjects. In 1945, following the defeat of the Nazis in WWII, the Gestapo's infamous little Black Book was found in Berlin: a list of British people to be arrested and killed after an invasion of Britain. It included journalists, scientists, church figures, even boy scouts. For people found to be on the list, including the entertainer Noel Coward, inclusion was considered something of a mark of honour. As Billy Bragg sang, "If you've got a blacklist I want to be on it", and as the cartoonist David Low said, "That is all right. I have them on my list too."

Cultures of ostracism, deplatforming and boycott are as old as humans, from the Economic League in Britain, through the US McCarthy era and Hollywood's "anti-communist" blacklist, to Cold War blacklisted artists under the Soviet 'chorna doshka' system. The movie The Lives of Others Donnersmarck06 gives a good fictionalised account of Stasi cultural policing. Indeed it's probably hard to find any group with shared interests that doesn't to some extent define itself in terms of those it excludes.

Systematised prejudice infects the behaviour of otherwise morally upstanding people, who are dragged along into normative patterns by technology. It introduces a 'computer says no' element of Cold Evil Kimbrell00. For example, it forces a teenage shop assistant into being a racist or sexist at the behest of their employer. Whenever you hear someone say 'There's nothing I can do', you are listening to a victim who has been robbed of their agency to behave as a decent human being.

Of course purveyors of discriminatory technology play word games by framing their practice as 'giving opportunity', 'rewarding loyalty' and 'adding value'. One could make the claim that well targeted advertising increases opportunity. But for whom? Does a suggestible person who reveals a lot of personal data really have their life opportunities increased as they blindly follow paths laid out for them by an algorithm? The algorithm itself is designed to maximise income for its owner. What the victim has lost is agency and attention.

Selective disadvantaging, like making the test harder for people who are already "smart", or even charging wealthy people more, comes from the same cluster of thinking as affirmative action and demographic segregation. Such schemes are faulty from a systems-theory viewpoint, operating on the weakest leverage points of negative feedback. They make disingenuous claims of "improved efficiency", like "creating elastic demand curves that drive down the average price for everyone". This is faulty economics, and so we must suspect other motives, be they politics, resentment or ideology.

Risk and security valuations in the insurance and medical businesses are frequently dressed up in perverse, sanitised rationales such as 'white people eat less fried chicken so they are thinner'. At the same time, truly random border security checks trying hard to appear "not racist" do a disservice when the demographic identity of likely terrorists is known. These are all pejorative formulas hiding behind more or less dishonest rationales.

By treating raw data as morally inert and machine learning algorithms as impartial processors, people are able to disown their basic obligations of decency and fairness. The economic error is always forgetting to factor in basic human dignity as a variable.

Insurance and security companies are a vast but hidden power in the world today. Many aspects of what they call the 'risk business' are strangely at odds with free-market and choice values. On the face of it insurance is a voluntary bet against your own good fortune. But we must add:

  • government compulsion
  • global databases
  • data trading and sharing
  • effective monopolies
  • syndicates and collusion among 'competitors'

What emerges is a powerful social control mechanism. In a nutshell, the risk business subtracts from people's liberty to take reasonable risks in their own lives.

As an example: retired people who have worked hard their whole lives and saved money to see the world in their later years find they cannot travel. Being denied health insurance means travel companies will not sell them tickets. Further, it gives older people a disincentive to go to the doctor for check-ups, because we live in a system that punishes people for their failings rather than helps them.

Harm only really arises out of secrecy or duplicity. Bad systems obtain profit and power by promising one thing but delivering something else. Knowing that you are disliked or excluded by some group is easy enough to deal with. But when provision of services - services one has a good expectation of enjoying - depends on opaque, covertly hostile forces, there is a serious problem. What makes discrimination malign is when it's hidden, as is the case with many forms of "algorithmic" decision making.

From a raw database it is generally hard to tell what intent is attached to it. But because machine learning algorithms encode context through the training process, it is harder to separate data from intent. So-called "algorithms", increasingly created and operated using "deep learning", are vastly more revealing of potentially illegal prejudices or plans harboured by an organisation. This means they will be even more closely guarded than simple databases. To be safe, we should probably assume that all "algorithms" closed to public scrutiny are malevolent.

2. Bibliography


  • [Mill59] John Stuart Mill, On Liberty, John W.Parker and Son (1859).
  • [Gert04] Bernard Gert, Common Morality: Deciding What to Do, Oxford University Press (2004).
  • [Feinberg84] Joel Feinberg, The Moral Limits of the Criminal Law (4 volumes), Oxford University Press (1984 - 1988).
  • [Raihani21] Nichola Raihani, The Social Instinct: How Cooperation Shaped the World, St. Martin's Press (2021).
  • [DeTocqueville35] Alexis De Tocqueville, Democracy in America 1835 (trans. Harvey C. Mansfield and Delba Winthrop), University of Chicago Press (2000).
  • [Barlow96] John Perry Barlow, A Declaration of the Independence of Cyberspace, Electronic Frontier Foundation (1996).
  • [Green02] Green, Technoculture: From Alphabet to Cybersex, Allen and Unwin (2002).
  • [Kimbrell00] Andrew Kimbrell, Problem of Cold Evil, Schumacher Lecture, Salisbury, CT., (2000).
  • [Toffler70] Alvin Toffler, Future Shock, Random House (1970).
  • [Taleb12] Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder, Random House (2012).
  • [Pew14] Pew Research, Social networking fact sheet, Pew Research Center, (2014).
  • [Badham83] John Badham, War Games, Metro-Goldwyn-Mayer (1983).
  • [Esmail2015] Sam Esmail, Mr. Robot, Universal and Anonymous Content (2015).
  • [Mumford34] Lewis Mumford, Technics and Civilization, Routledge, London (1934).
  • [Berne76] Eric Berne, A Layman's Guide to Psychiatry and Psychoanalysis, Penguin (1976).
  • [Agar10] Nicholas Agar, Humanity's End: Why We Should Reject Radical Enhancement, MIT Press (2010).
  • [Chaplin36] Charlie Chaplin, Modern Times, United Artists (1936).
  • [Orwell49] George Orwell, Nineteen Eighty-Four, Secker and Warburg (1949).
  • [Giroux10] Henry Giroux, Zombie Politics and Culture in the Age of Casino Capitalism, Peter Lang (2010).
  • [Mann50] Horace Mann, The Case for Public Schools, Cambridge (1850).
  • [Gatto92] John Taylor Gatto, Dumbing Us Down: The Hidden Curriculum of Compulsory Schooling, New Society Publishers (1992).
  • [Tril73] Jean-Claude Trichet, Meghan O'Sullivan, Akihiko Tanaka, David Rockefeller, The Crisis of Democracy, The Trilateral Commission, (1973).
  • [Powell71] Lewis Powell, Attack On American Free Enterprise System, Supreme Court of the United States, (1971).
  • [Bernays23] Edward Bernays, Crystallizing Public Opinion, New York: Boni and Liveright (1923).
  • [Bernays27] Edward Bernays, Propaganda, New York: Horace Liveright (1927).
  • [Zuboff19] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, PublicAffairs (2019).
  • [Cialdini84] Robert Cialdini, Influence: The Psychology of Persuasion, Boston: Allyn Bacon (1984).
  • [Klein07] Naomi Klein, The Shock Doctrine: The Rise of Disaster Capitalism, Knopf Canada (2007).
  • [Anderson08] Ross Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, Wiley (2008).
  • [Grundy96] Anna Frances Grundy & John Grundy, Women and Computers, Intellect Books (1996).
  • [Kubrick68] Stanley Kubrick & Arthur C. Clarke, 2001: A Space Odyssey, Metro-Goldwyn-Mayer (1968).
  • [Cammell77] Koontz, Cammell & Jaffe, Demon Seed, Metro-Goldwyn-Mayer (1977).
  • [Marsden17] Marsden & William Nesbitt, I Spy with My Little Eye: The Origins and Effects of Mass Surveillance, Psychology Today (2017).
  • [Wong17] Cynthia Wong, The Dangers of Surveillance in the Age of Populism, Newsweek (2017).
  • [Armon15] Sedge Armon, Being Seen/Being Watched: Surveillance, Technology, and Madness, Model View Culture: Technology, culture and diversity., (2015).
  • [Tufekci14] Zeynep Tufekci, Engineering the public: Big data, surveillance and computational politics, First Monday, (19), 7 (2014).
  • [Barry07] Bruce Barry, Speechless: The Erosion of Free Expression in the American Workplace, Berrett-Koehler (2007).
  • [Schep16] Tijmen Schep, Design my privacy, BIS (2016).
  • [Noelle-Neumann73] Noelle-Neumann, The spiral of silence: Public opinion, our social skin, University of Chicago Press (1973).
  • [Lippmann22] Lippmann, Public Opinion, New York: Harcourt, Brace and Co (1922).
  • [Blankenship86] Lloyd Blankenship, The Conscience of a Hacker, Phrack 3 (1986).
  • [Wu16] Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads, Penguin Random House (2016).
  • [Odell19] Jenny Odell, How to Do Nothing: Resisting the Attention Economy, Melville House (2019).
  • [Beller06] Jonathan Beller, The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle, University of Chicago Press (2006).
  • [Fogg02] Fogg, Persuasive Technology: Using Computers to Change What We Think and Do, Morgan Kaufmann (2002).
  • [Mathur19] Arunesh Mathur, Gunes Acar, Michael Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty & Arvind Narayanan, Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites, Proceedings ACM CSCW Human Computer Interaction, 3, (2019).
  • [Helmore22] Edward Helmore, Facial recognition bars lawyer from Girl Scout trip to Rockettes Christmas show, The Guardian, US News, (2022).
  • [Donnersmarck06] Florian Henckel von Donnersmarck, The Lives of Others, Wiedemann and Berg, Bayerischer Rundfunk (2006).
  • [Haggis04] Paul Haggis, Crash, Bull's Eye (2004).
  • [Liu20] Wendy Liu, Abolish Silicon Valley, Penguin Random House (2020).