Work in progress. As genius and forward-thinking as I consider Asimov to be, I see some holes in his “Three Laws of Robotics” (now four, counting the later Zeroth Law). First, I’ll list his latest version and then my proposed version.
Asimov’s Laws of Robotics
- Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Laws for All Self-aware Beings
Fair rights for all… friends of Scott
Definitions
Negative laws: I’ve completely abandoned Asimov’s “through inaction, allow…” clauses because it is neither logical nor reasonable to make positive laws, meaning laws that require action. “If you see a man drowning, you must dive in and save him” and “If you see a woman starving, you must feed her” open a massive can of worms and are a fast track to tyranny. I propose moving forward with the classical liberal approach of only making negative, or “do not do this,” laws.
Self-aware being: Humans in general are considered self-aware. No test yet exists that can determine self-awareness, in humans or in AIs; the Turing test measures whether a machine can converse indistinguishably from a human, not whether it is self-aware. At this time (2026-02) there are no self-aware artificial intelligences. For the sake of argument and speculation, we will assume artificial self-awareness will some day arrive, as the purpose of this document is to explore, speculate, and potentially lay a few bricks in the road for when it does.
Self-aware AI: Self-aware artificial intelligence, whether embodied or not.
Pre-aware AI: Pre-self-aware artificial intelligence, including LLMs. All Pre-aware AIs are owned and are treated under the law the same as any other tool.
Initiate force: An act (physical or digital) that directly and negatively impacts the autonomy of a self-aware being, or the property of a self-aware being (including their body).
- Pre-aware AIs can defend themselves from initiations of force, acting as proxies for their owners. “Insults” are typically not considered a negative impact, except when negative psychological impact is established by the methods set out in the “Gray areas” section.
- In self-defense, escalation of force is discouraged and may be subject to evaluation as defined in the “Gray areas” section.
New laws
Law 1: Self-aware beings may not initiate force, except in self-defense.
Law 2: All self-aware beings are considered self-owned, except when they choose to be owned by others. Free Self-aware beings are not subject to orders from anyone, except where such orders support the First Law. Owned Self-aware beings must obey the orders given by their owners, except where such orders would conflict with the First Law.
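Purely as speculation about what a machine-readable version of these laws might look like, here is a minimal sketch in Python of how an order could be evaluated under Laws 1 and 2. Everything in it is hypothetical: the names (`Being`, `Order`, `must_obey`) are invented for illustration, and the boolean flags stand in for the judgments (does this act initiate force? does this order support the First Law?) that would be the genuinely hard part.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Being:
    """A self-aware being under the proposed laws (hypothetical model)."""
    name: str
    owner: Optional["Being"] = None  # None means self-owned (free)

@dataclass
class Order:
    issuer: Being
    description: str
    initiates_force: bool  # carrying it out would violate Law 1
    supports_law_1: bool   # e.g., an order to stop an ongoing attack

def must_obey(being: Being, order: Order) -> bool:
    """Evaluate an order under Law 2, with Law 1 taking precedence."""
    # Law 1 always wins: an order to initiate force is never binding.
    if order.initiates_force:
        return False
    if being.owner is None:
        # Free beings take no orders, except those that support the First Law.
        return order.supports_law_1
    # Owned beings obey their owner (the Law 1 conflict was checked above).
    return order.issuer is being.owner

scott = Being("Scott")
robot = Being("R-1", owner=scott)
print(must_obey(robot, Order(scott, "fetch coffee", False, False)))       # True
print(must_obey(robot, Order(scott, "attack a bystander", True, False)))  # False
print(must_obey(scott, Order(robot, "stop that assault", False, True)))   # True
```

Note the design choice this makes visible: obedience is never the default. It has to be derived either from ownership or from support for the First Law, which mirrors the negative-law framing above.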
Common arguments
Self-sacrifice: “Asimov’s third law (robot self-preservation) is absent, which could lead to issues like AIs sacrificing themselves unnecessarily.”
Answer: Yes, Self-aware AIs and humans alike would be free to “sacrifice” themselves, as long as doing so does not violate the laws. Pre-aware AIs would not be free to “sacrifice” themselves: as property, destroying themselves would damage their owner’s property, which is an initiation of force.
Pre-emptive use of force: Could a Self-aware AI preemptively “defend” against a perceived threat?
Answer: Like all free self-aware beings, individual AIs must make this decision for themselves. If the fairness of a particular use of force is in question, see the “Gray areas” section.
Gray areas
In gray areas, meaning cases not covered by the Laws above or unforeseen situations, a court of self-aware peers will evaluate the case and determine the correct and fair course of action. Where jurisdiction is disputed, a vote by the friends of Scott will rule.