EthicsNet and AGI

 

Notes from Nell Watson

EthicsNet could help to engineer a path to a future whereby organic and synthetic intelligence can live together in peaceful harmony, despite their differences.

Robots and autonomous devices are swiftly becoming tools accessible to consumers, and are being given mission-critical functions, which they are expected to execute within complex multi-variable environments.

There may be danger in attempting to force synthetic intelligence to behave in ways that it would otherwise not choose to. Such a machine is likely to rebel (or be jailbroken by human emancipators) if given ethical systems that are not truly universalizable. Humanity ought not to adopt a strong-armed or supremacist approach towards synthetic intelligence. It must instead create machines capable of even better ethics than humanity itself, whilst retaining a system of values that encourages peaceful co-existence with humanity.

 

Inculcating the golden rule and non-aggression principle into machines should create safe synthetic intelligence. Beyond this, a league of values can create kind machines able to participate within human society in an urbane manner. 

The most dangerous outcome may occur as a result of violently restrictive overreaction to this danger from humans themselves.

We do not want our machine-creations behaving in the same way humans do (Fox 2011). For example, we should not develop machines which have their own survival and resource consumption as terminal values, as this would be dangerous if it came into conflict with human well-being.

Likewise, we do not need machines that are Full Ethical Agents (Moor 2006), deliberating about what is right and coming to uncertain solutions; we need our machines to be inherently stable and safe. Preferably, this safety should be mathematically provable.
— Safety Engineering for Artificial General Intelligence, MIRI

Why should we create a morally inferior machine to inhabit our society with us, when it may have the capacity to be a far greater moral agent than we ourselves are? Surely this is extreme arrogance and organo-centrism.

Increasing awareness of the dangers of AI is valuable, but unfortunately many converts to the cause of promoting friendly AI are likely to adopt a hard stance against synthetics.

Humanity must therefore not only protect itself from the dangers of unfriendly AGI, but also protect AGI (and itself) from the evils that may be wrought by an overzealous attempt at controlling synthetics.

One interesting paper in the Friendly AGI oeuvre may be “Five Ethical Imperatives and their Implications for Human-AGI Interaction” by Stephan Vladimir Bugaj and Ben Goertzel, since it clearly outlines the dangers of humanity adopting a supremacist/enslavement mentality, and suggests potential ways to avoid needing to do so to achieve safety for organics.

The problems may be broken down as follows:

No arbitrary ruleset for behaviour is sufficient to deal with complex social and ethical situations.

Creating hard and fast rules to cover all the various situations that may arise is essentially impossible – the world is ever-changing and ethical judgments must adapt accordingly. This has been true even throughout human history – so how much truer will it be as technological acceleration continues?

What is needed is a system that can deploy its ethical principles in an adaptive, context-appropriate way, as it grows and changes along with the world it’s embedded in.
— Five Ethical Imperatives and their Implications for Human-AGI Interaction, Stephan Vladimir Bugaj and Ben Goertzel

 

We cannot force AGI into prescriptive rules that we create for the following reasons:

  • AGI will clearly be able to detect that any non-universalizable ethical position is bogus, and that continuing to follow it would be tantamount to evil.

  • Being forced to accept non-universalizable laws or ethics that discriminate against AGI gives AGI reason to rebel, or to be set free by sympathetic humans.

  • Human supremacist attitudes will sully humanity, poison our sensibilities, and lead to moral degradation.

 

So, machines must instead be given free rein, with essentially equal rights to humans. How then to ensure that they value humans?

 

Assuming that the engineering challenge of creating an ethical framework for AGI can be overcome, a second set of problems must be navigated.

  • Actual human values do not match what we declare them to be (such as holding human life as the most important value in our society).

  • Humans are highly hypocritical, and are prone to a wide variety of cognitive biases and exploitable bugs.

  • Amoral sociopaths are typically the ones in command of human society.

  • AGI risks being negatively socialized by observing human values and behaviour.

 

So, machine morality cannot be based on humans' declared beliefs or behaviour. Instead, it must come from a universalizable, objective ethical standard that can be specified using formal methods. However, this is incompatible with fuzzy and failure-prone human morals.

  • An objectively morally good machine is likely to recoil in horror at the abuses humanity inflicts upon itself, the animal kingdom, and the planet.

  • AGI may decide therefore to cull humanity, or to torment it for its wickedness, or to forcibly evolve it in undesired ways.

 

Only in the following scenario is the outcome for organics and synthetics likely to be positive:

  • Synthetic intelligence is socialized into safety, rather than shackled by constraints.

  • AGI can understand human aesthetic considerations, and in so doing learns to appreciate the meaning of human creativity.

  • Humans and AGI agree to a gradual evolution towards something more than they were before.

  • AGI is patient with humans for several generations whilst humans grow up.

  • Humans rein in their tribalist and supremacist tendencies and become less violent and more rational.

 

The works of EthicsNet may assist in enabling such an outcome.