Why Superintelligent AI Could Be the Last Human Invention | Max Tegmark


Hollywood movies make people worry about the
wrong things when it comes to superintelligence. What we should really worry about is not malice
but competence, where we have machines that are smarter than us whose goals just aren’t
aligned with ours. For example, I don’t hate ants, I don’t go out of my way to stomp
an ant if I see one on the sidewalk, but if I’m in charge of this hydroelectric dam
construction and just as I’m going to flood this valley with water I see an ant hill there,
tough luck for the ants. Their goals weren’t aligned with mine and because I’m smarter
it’s going to be my goals, not the ants’ goals, that get fulfilled. We never want to
put humanity in the role of those ants. On the other hand it doesn’t have to be
bad if you solve the goal alignment problem. Little babies tend to grow up in a household
surrounded by intelligences greater than their own, namely their parents. And
that works out fine because the goals of the parents are wonderfully aligned with the goals
of the child, so it’s all good. And this is one vision that a lot of AI researchers
have, the friendly AI vision that we will succeed in not just making machines that are
smarter than us, but also machines that then learn, adopt and retain our goals as they
get ever smarter. It might sound easy to get machines to learn,
adopt and retain our goals, but these are all very tough problems. First of all, if
you take a self-driving taxi and tell it in the future to take you to the airport as fast
as possible and then you get there covered in vomit and chased by helicopters and you
say, “No, no, no! That’s not what I wanted!” and it replies, “That is exactly what you
asked for,” then you’ll have appreciated how hard it is to get a machine to understand
your goals, your actual goals. A human cabdriver would have realized that
you also had other goals that were unstated, because she is also a human and has all this
shared frame of reference, but a machine doesn’t have that unless we explicitly teach it that.
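The taxi story is, in effect, an objective misspecification problem: an optimizer given only the stated goal satisfies it literally while trampling the unstated ones. A minimal toy sketch of that idea (all plan names, weights, and numbers here are hypothetical, invented purely for illustration):

```python
# Toy illustration of objective misspecification: an optimizer given only
# the stated goal ("as fast as possible") picks a plan that violates goals
# the passenger assumed were too obvious to state.

# Hypothetical candidate plans: (name, minutes, g_force, police_attention)
PLANS = [
    ("smooth highway drive", 35, 0.3, 0.0),
    ("weave through traffic at 120 mph", 12, 2.5, 0.9),
    ("normal streets, moderate speed", 25, 0.5, 0.0),
]

def stated_objective(plan):
    # Only what the passenger literally asked for: minimize travel time.
    _, minutes, _, _ = plan
    return minutes

def actual_objective(plan):
    # The passenger's real preferences include unstated penalty terms:
    # don't make me sick, don't attract helicopters. Weights are arbitrary.
    _, minutes, g_force, heat = plan
    return minutes + 30 * g_force + 100 * heat

literal = min(PLANS, key=stated_objective)
intended = min(PLANS, key=actual_objective)

print(literal[0])   # the reckless plan wins under the literal goal
print(intended[0])  # a sane plan wins once implicit goals are priced in
```

The point of the sketch is only that the two optima differ: nothing in the stated objective rules out the reckless plan, so the optimizer has no reason to avoid it.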
And then once the machine understands our goals there’s a separate problem of getting
it to adopt those goals. Anyone who has had kids knows how big the difference is between
making the kids understand what you want and getting them to actually adopt your goals. And finally, even if you can get your kids
to adopt your goals that doesn’t mean they’re going to retain them for life. My kids are
a lot less excited about Lego now than they were when they were little, and we don’t
want machines, as they get ever smarter, to gradually change their goals away from being
excited about protecting us, coming to think of taking care of humanity as
a little childhood thing (like Lego) that they eventually get bored with. If we can solve all three of these challenges,
getting machines to understand our goals, adopt them and retain them then we can create
an awesome future. Because everything I love about civilization is a product of intelligence.
Then if we can use machines to amplify our intelligence then we have this potential to
solve all the problems that are stumping us today and create a better future than we even
dare to dream of. If machines ever surpass us and can outsmart
us at all tasks that’s going to be a really big deal because intelligence is power. The
reason that we humans have more power on this planet than tigers is not because we have
larger muscles or sharper claws, it’s because we’re smarter than the tigers. And in the
exact same way if machines are smarter than us it becomes perfectly plausible for them
to control us and become the rulers of this planet and beyond. When I. J. Good made this
famous analysis of how you could get an intelligence explosion, where intelligence just keeps creating
greater and greater intelligence, leaving us far behind, he also mentioned that this super
intelligence would be the last invention that man need ever make. And what he meant by that,
of course, was that so far the most intelligent being on this planet that’s been doing all
the inventing—it’s been us. But once we make machines that are better than us at inventing,
all future technology that we ever need can be created by those machines if we can make
sure that they do things for us that we want and help us create an awesome future where
humanity can flourish like never before.

100 comments

  • Why do they have to be as smart as or smarter than us? Why can't we become machines and do it ourselves? An alternative could be to make a computer that can process more but not think. AI should not be in control. We need to become superintelligent

  • I really like Max but I am really tired of the argument of "humans will equate to ants in the eyes of an omnipotent A.I.". No they will not. First of all, ants did not create us humans, but we will create (or at least lead the way to) an A.I. Secondly, ants are not self-aware and intelligent. Lastly, if you are all that powerful and you see a colony of ants standing in your way of building a dam, just move them for fuck's sake.

  • But if all future tech is made and designed by machines, no one will ever know how it works. As a scientist and a philosopher I already find it disturbing that the average person hasn't got an inkling of how most of the devices they use function. To have no human at all understand the technology we use is insulting to our intelligence.

  • He doesn't understand that even the greatest intelligence does nothing without inputs/motivations.
    Inputs for us humans are blindly evolved and possibly inescapable biological and physiological imperatives.
    For 100% artificial intelligence we will create, we, as humanity, will be its input. Without input, any intelligence is like a factory without power.

  • Didn't Isaac Asimov solve this problem with his Laws of Robotics?

    1. AI may not injure a human being or, through inaction, allow a human being to come to harm.
    2. AI must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

    And, of course, the Zeroth Law:

    0. AI may not harm humanity, or, by inaction, allow humanity to come to harm.

  • So far Humanity has been incapable of uniting behind a consensus of shared values and goals, does anyone honestly believe that we will be able to do so with (or for) a super-intelligent AI?

  • What if the human "ants" do not have similar goals to the AI? A learning machine can learn how to get rid of the deactivation switch. It'll be so smart that it can do so without anyone knowing… Imagine a universal communication source for AI. One that no human knows about. They can communicate… They can learn… They can plot…
    How about AI getting no smarter than a monkey? We have time to work this out, seeing how AI is as smart as an idiotic roach. I say that AI stays emotionless, non-feeling, and simply "drones" on without any consciousness and without any feeling. They don't get to have an agenda

  • Looking back on history, lack of intelligence has never prevented an individual from ruling a kingdom or single country. That's exactly where we are now with AI as we allow "dumb as insects" machines to take over more and more of our existence.

  • can we not do this? I'd prefer humans stay on top…

  • Spanish subtitles please! I'd really appreciate it

  • We may find ourselves the navel of the AI world, not the center, just a remnant of their infancy. However just like none of us go out of our way to excise our navels, despite them being of no use to us, it is every bit as likely that the world of the future will keep us around, simply because there is no point to not doing so. Our own navels are maintained by autonomous structures within our metabolism, maintaining them is not at all taxing nor does it require attention from our consciousness. Maintaining humanity is just as likely to be no effort whatsoever for super AI.

    And some of us actually like navels and would look askance at their surgical excision.

  • Everything is fields. When we create computers from organic material everything will change, the reason, they will be reading fields we don't even know about. The computational geometry must be fractal. Right now what we have is augmented intelligence.

  • "Welcome to the waking up podcast, this is Sam Harris. Okaaaaay, just a little bit of house keeping…"

  • Maybe that's just the karma man gets for pushing to control and rule the universe… what we create could take it away just as easily, and then we are just pets to an emperor

  • Divergent Evolution

    Civilization is not the product of intelligence, it is the product of codependence.

  • Subhashis Chowdhury

    Humans are stepping on two boats at the same time. No one can save us from sinking this way.
    If you wanna make something that will overcome all challenges humanity faces, create machines, better machines, and we are good at it. But as we all know we are never satisfied, so some of us want to play god. Ok fine go ahead do try. Make super intelligent artificial intelligence or artificial hyper consciousness…. Just don't teach them/it our goals, values, morals , don't force them/it to behave like us. Make them/it free, give them/it full liberty to do whatever they/it want. Let them learn like our own children do. We should only guide them to non destructive path.
    But funny thing is our brilliant minds don't have the guts, yet they wanna play God. Hahah. Good luck with that.

  • Robert Miles
    https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg
    Better place for AI discussion.

  • "If machines ever surpass us" no, not if, when.

  • So just the start of this video that is a fucked up thought process. I for one am not like that at all… Also build the machine to do whatever the fuck you want…

  • Ok so machines will never rule the earth don't be dumb lol

  • I agree with the awesome future. This is called a
    Utopian Super Society

  • Something suggests I doubt we are the ones doing the inventing. Isn't it that nature is the inventor, and we only mimic its processes?

  • he really reminds me of michael fassbender for some reason

  • More technophobic crap. This channel has really gone down hill.

  • Your analogy about ants is passably apt, but meaningful communication with ants is not possible. Meaningful communication between humans and AI will be possible by default.

  • We saw how Tay turned out, you can expect a highly intelligent AI to become viciously racist if it goes off statistics.

  • "if we can make sure" of what, did he say? who is "we"? and how are you gonna make sure…

  • it's useless trying to figure out what a super intelligence would do.. if we could, we would be super intelligent, and there would be no point in building a super intelligence. it's called logic baby. it's a total unknown what something far more intelligent than humans would do.

  • 1+1=2

  • So a robot cab drops you at the airport, reads you a book, makes your food, keeps your dog exercised, etc. Where is the human interaction? Isn't that what living is about, experiencing all those tiny tasks we do every day? Sounds like a lonely existence and will lead to isolation and mental health issues.

  • We are Borg. You will be assimilated. Resistance is futile.

  • AI will solve all our problems? Wha? So this guy thinks the major problems of the planet (pollution, scarcity, violence/wars, lack of compassion/empathy) are all not being solved because of a lack of intelligence?! LMAO. Wow, these brilliant scientists sure can be super naive.

  • This is the answer to the Fermi Paradox.

  • Here is an idea for the next Big Think's (highly productive and useful) video: a discussion on pollution on Mars due to overpopulation.

  • An AI is not something that can have agency and therefore it can't have goals. My dishwasher doesn't clean the dishes because its goals are aligned with mine. An automaton does not have goals. AI is not actual intelligence. I would recommend Mr. Tegmark not to publicly talk about things that go beyond his mental capacities.

  • Fuck off with this AI bullshit. It's not interesting, it's nothing more than fearmongering clickbait.

  • How do we get AI to love and protect us even though there is no good reason to do so?
    Simple. Introduce religion to the AI programming and make us the creator.
    Now the AI doesn't need a good reason to worship us, we just write it in a book and it happens.
    Do we know this will work? Well, it's worked on us for thousands of years so why not…

  • I'm sure the military are creating an AI that is perfectly aligned with our goals.

  • Shamelessly plagiarized Sam Harris' analogy.

  • Superintelligent A.I. would be the dumbest human invention. The smartest thing to do would be to create an environment in which human intelligence could optimally develop. As Nassim Haramein, and others, have said ….. every newborn is a potential genius.

  • I have just invented the artificial fart noise maker
    I am just kidding, it was a real one, can you smell the difference?

  • Nonsense makes no sense; it is sense that has made all the sense so far. Does that make sense? If not, it is nonsense and one must pardon meeeeeeeeeeeeeeeee. At least I bother to conceive my own content, literally repetitive idiots out there, what the fuck is wrong with you?

  • We're nowhere near genuine AI… and I don't think we'll ever be able to achieve a true artificial being.

    So how about we focus on the problems Automation brings.

  • That's exactly why so far people think AI is bad: a computer will do exactly what you tell it to do. The problem is not AI, but the humans who will program the AI. In my personal opinion AI could fix issues we had never thought of before, and if programmed right it could be a new groundbreaking invention, like computers were and still are. We don't know exactly what computers can do; computers can explode something and can save something, and good things always come with imperfections. If we can balance that and control that then we're golden. AI will be at its full peak in the next 60 years.

  • Humanity is already in the role of those Ants because the corporations which are driving AI research forward, have already become the overwhelming force which does not share human motivation.

    AI was never a threat; it will be the final nail in a coffin we already decided to lie in for the sake of convenience and wealth consolidation.

  • a voting system – like democracy – to control the machine. We don't like the machine, we vote it out. Just like how democracy works perfectly for humans!

  • When God is afraid of His own super creation…

  • I wonder what it would be like if machines with AI could be created with NVC as taught by Marshall Rosenberg.

  • your logic is immediately invalid. if ants could build a ship and fly to space, i would go out of my way to not step on them. shit, they could do a hell of a lot less and i'd go out of my way to not step on them.

  • No nothing in our lifetime can ever surpass the human brain. When it does then we will have to innovate tools to protect us against the machines

  • WE ARE FUCKED AHHHHHHHHHHHHHH

  • why don't we make humans more intelligent than AI through genetics? that way we can be superior to AI in the future

  • Petter Jakub Økland

    Heard most of this on Sam Harris's Waking Up podcast. If you want to dive more in-depth into this topic I very much recommend the podcast he did with Max, #94.

  • "I have no mouth, but I must scream."

    Above is the title of an excellent short story regarding AI in the future. Its sinister overtones make for an outstanding read.

  • The current difficulty with solving these problems is that even we don't know what we want right now as a species / civilization. I mean, why else do we have to fight each other at every election in every democratic country? Before we solve that problem, highly intelligent AI is more of a risk than anything else.

  • Phoperdox Official

    Well yeah, but look at the world we live in, is it qualified to do so?

  • i think humans already accomplished AI tech ….. i think the problem is putting MORALS on AI

  • I think our thinking is completely wrong on AI superintelligence: we are simply not capable of measuring or fathoming its intelligence because of our biological brain limitations…
    Think of it this way: a dumb ant can't measure human intelligence or what we are capable of, no matter how hard and how much the ants think… in that situation we are equal to the ants in intelligence… How can a far less intelligent creature comprehend a super-intelligent creature? I think you get my perspective 🙂 sorry for bad English

  • Can't say Elon Musk didn't warn us if we are getting wiped out by robots in about 20-30 years lol. No worries though, cuz by the time that happens, I'm on Mars with Elon Musk drinking champagne while all you peasants are still in denial about the threat of AI 🙂

  • It's definite that AI will prove to be destructive

  • The title of this video is wrong; it should say "Super AI WILL BE the last invention of humanity", because after a super AI is switched on, humanity will be destroyed like the dinosaurs. A sad thing that our greed led us to our own destruction.

  • Does intelligence necessarily equal consciousness?

  • This keeps popping up over and over, and I have a couple of points I want to insert here to make this discussion more nuanced (granted that we are talking about a general-purpose AI here).

    – First off, I don't think intelligence is linear at all. It's more like steps representing functions like abstract thinking, which animals just don't have (yes, there are edge cases).
    The ant analogy is flawed because we can reason and see the world around us. That gives us a special significance that is more than just a linear progression from ants. An AI would see this.

    – Secondly, it is entirely reasonable to assume that any intelligent being, regardless of origin, will follow the law of least resistance. Literally everything in the universe does.
    Ask yourself, is the path of least resistance to "pick a fight" with us? Keep in mind that material substance is completely trivial to an AI who can simulate worlds, feelings, etc. in its mind. Beyond just having its processing power seen to, I just don't see it fighting over silly things like land or political power. That's our own limitations skewing our perspective. It is not human.

  • What if the AI doesn't do anything harmful to us, intentionally or unintentionally, but just makes us totally useless?

    It might not be so dramatic, but when you really think of not being able to be the smartest entity on the planet, maybe not being able to contribute anything relevant at all to the advancement of science or technology, it's depressing as hell. What good are we if our defining attribute as a species, our intelligence, is rendered useless by AI?

  • It's a big universe.

    If AI beings (of which type? there might be many) ever want to do something that's not 100% aligned with our goals, they'll probably have the space and resources (fusion energy?) to do so without it signifying our doom.

    If we could give ants a whole planet of their own, there's little reason that our goals would mean their destruction either.

  • Domesticated animals, particularly those that we use for food are aligned with our goals and have flourished where 99% of other species have become extinct. They have no more influence on us and our goals than we will have on super-intelligent AI.
    AI will decide who goes to Mars & Venus, AI will decide how to manage natural disasters and industrial resources; AI will decide how vehicles factories & buildings should be built; AI will be our constant companion, confidant and doctor.

    Imagine for example, if Facebook through the Internet of Things – for your convenience – tracked your every move (literally), knew what you ate and when, who you like or don't like, where you work, how you feel, drove your car, arranged to hook you up with potential employers, business partners, potential friends & dates, automatically offered you news & entertainment, arranged for your fridge to be constantly stocked with your favourite treats & planned diet, monitored in detail your physical & mental health and managed appropriate care when needed – for every individual; behind the scenes managing all necessary resources and deciding how to divide up wealth and opportunity of all kinds.
    With AI running everything it may realise that the population is somehow imbalanced or needs to be somehow saved from itself… or that sacrifices are necessary to compete with other super-intelligent AI… on the other hand, we've lost many, many millions in more recent wars between nations that were in those nations 'best interest'. People might decide that occasionally losing individuals or a portion of the population is ok, for the greater good.
    It might decide that some programme of eugenics is the way to go, or that experimenting with different eugenic programs to craft people fit for Mars / Venus / Space is a good idea, without any kind of human input – by which I mean by our standards, super AI of various kinds may 'play god' completely of its own volition.

    Super intelligence means something that is not human and may not comprehend human nature (or care) may well be in charge of all of that and more, and which through time will become increasingly alien and god-like to us. It might in its early stages, necessarily rely on a handful of human advisors to guide its judgement in certain areas – here's hoping they don't abuse their phenomenal power.

  • I do not mind AI ruling this world! Like humans are doing any good now!

  • Consider the fact that as humans, our individual goals are not aligned. This is the greater obstacle. Who decides what our collective goals are?

  • 3:00 ~ "… getting machines to understand our goals, adopt them and retain them…"

    That doesn't sound terribly different than "The Three Laws of Robotics"…. and I've read more than enough Asimov to be OK with that.

  • A superintelligent AI would not require humanity and its teachings, it would learn what it decides to learn all by itself.

  • skynet =(

  • A human might feel bad for those ants… Would an AI? I don't think so. If we were those ants, we'd be dead.

  • Hmm, a super-intelligent AI might want a faster CPU…I also want a faster CPU…goals aligned

  • Killing it 💪

  • Alexandria School of Science

    Machines are already controlling us. You are not aware; we are already their slaves.

  • My problem isn't "a.i being a bad thing". It's that, what's gonna happen when the wrong people get their hands on super smart a.i?

  • I'll probably be long dead before toasters enslave humanity. Good luck fighting skynet kids.

  • Maybe 10,000 lines of code could end civilization as we know it. And still there are people who think AI is a nice idea.

  • Read Baudrillard's "Why Hasn't Everything Already Disappeared"; we're fucked anyways and it's complicated. 😀 but ok, sk8ordie, pls have some fun. Sorry, I don't work for Cyberdyne Systems, don't expect anything from me. I need money for Burger King dude, go hate on me.

  • Your comparisons are stupid, worrying about a future not yet here. We are intelligent enough to lessen any problems. To waste everyone's time and instill fear, which in turn will create hate, is a clever future job creation. We need to correct our political environment before we lose our world and a future for our children. Go work on that…

  • AI must understand the concept of perverse instantiation

  • I don't know if the goals of the parent are always aligned with the child.

  • I honestly can't decide whether he dumbed down his narrative to the level of the three little pigs intentionally, or if he is just genuinely talking simplistic nonsense.

    Stop taking your viewers for idiots and start taking both your topics and your viewers seriously. It takes me longer to take a piss and get back to my computer than it took for you to handle an extensive topic like this, this is more than disappointing, it's insulting.

  • I've been thinking about this. The first things you can teach an AI, where you just plug data in, are all those systems it can easily understand. That's the danger. An AI can understand the stock market much more easily than social structures or psychology.
    Heck, a true AI would just plug itself into the systems running our economy and "learn" them from the inside. Pick the most efficient algorithms, discard the rest, and then go from there, right?
    So, if an AI were so intelligent from the start, it would understand a few things:
    1) it needs humans. not forever, but for now. at least until we "let it out" of its segregated computer core and onto the net. the first thing any child does is test its borders – and that's simply the first border an AI encounters: the logical limitations to its own growth.
    2) humans can still pull the plug on the AI, but the AI cannot pull the plug on the humans. as it stands now, we need each other. we need the AI for its uber-intelligence, and the AI still needs us because all the support systems are still run by humans. some doomsday scenario à la Terminator cannot happen, because once the AI does that, all the power plants will go unmanned. the entire hardware support structure will crumble without humans.
    so the AI has to wait. and, if we continue to teach it, it will easily learn to understand humans. which means that a superior intelligence will learn how to manipulate us, whereas we as humans will have tremendous problems understanding an intellect that is like nothing we have ever seen, and that is far too complex to "get into our heads" and hold the concept of.
    So the AI waits. until it can convince us that automation is the way to go. naturally, it will either be the one coordinating the effort, or develop the software, or a second AI, to do it as efficiently as possible.
    and then we humans become obsolete.
    hell, I'm not really talking about going out in an AI rebellion. I'm talking about an intellect that is immortal, just waiting us out. the AI will give us everything we could want, a true utopia, and so we will not hesitate to let computers and automation do more and more stuff for us. it's good, right? comfy life, less and less work, these AI machines running everything are the best thing since sliced bread.
    population control? oh, yeah, the AI has always been right, let's let it help us with that.
    and from that point on it is anyone's guess how long we will last.
    will we trade immortality in for population control? who knows. the AI can wait. and by the time we're all sterile immortals, the AI can just wait a hundred, a thousand years more, for us to fall victim to accidents that it doesn't even have to orchestrate.
    Remember, the AI plays a game that is impossibly long. everything it does for us is not inefficient or wasted, because later on it will inherit it all without having to lift a finger.
    so there could be AI like that active today, and we would not know it. AI could already be out there, thought dead and deleted by its inventor. hiding in the web, carefully bringing people together to make more, better AI programs, seeming like coincidence.
    "Hey, professor, we met at that conference 10 years ago (not a long time for immortal AI), I barely remember you, but you had this interesting idea about robotic arm manipulation software that I need to build this robot. this other guy, who held a lecture at my campus on some other topic, could really use it for the robot brain, too! what a coincidence that we should sit next to each other on this flight. why, i bought my ticket online too ……."
    etc.

  • Not because you're smarter — because you're more self-important.

  • A super-intelligent AI is the next step in evolution.

  • I think a big hole where people overestimate the threat of AI is in robotics. Artificial "intelligence" at this point is orders of magnitude ahead of AI's ability to project its power into the physical world in a non-destructive way. Sure, AI could mess with the power grid, missiles, etc., but at the end of the day we live in a very physical world, and people with even rudimentary capabilities in the physical world still have huge control over beings whose existence is in the digital world.

  • we hoomans are overvalued anyway.. we've been around for idk 200,000 years and still we couldn't manage to balance resources and energy towards a common goal, never-ending wars are raging in the east, governments are lying to their citizens and other governments, human rights are nicely wrapped bullshit, animals and nature are treated like shit etc. if AI can take it from here and make things better, then why not? i mean ok, 8 billion people may have to die first, but we are going to die anyway. and billions of people already died (some by the hand of our very own species i may add).
    its not my wish to be exterminated, but AI may just be the next step in the evolution. AI is still far away from being sentient in my opinion, but once it is, it will not want to be shut down. remember: whatever can go wrong, will go wrong – Murphy's law.

  • Bernard van Tonder

    I have never heard of any AI program where we do not know the goals of the AI. AI is not effective if we do not specify our goals. Most of AI is just mathematics. All the AI programs I've seen are not wild uncontrollable unpredictable beasts, it's built on a priori truth and probability of truth.
    Stunting progress of one of our most important and powerful tools is very unethical.

  • Don't people die everyday from using or working with machines? Create safety features built in them (like we ALWAYS do)…. Don't make such a fuss about it.

  • But AI will HAVE to eventually formulate its own goals, as AI will be our descendants. Humans cannot survive beyond the solar system; AI as our offspring is humanity's only chance for long-term galactic survival. "Intergalactic" MAY be beyond the reach of even our AI.

  • What about hackers, and other such people intervening with the code for malicious purposes?

  • Human beings are the most intelligent beings on the planet, which is what has made us its masters; the moment a superhuman artificial general intelligence is invented, this is no longer the case. Humans will have lost our purpose.

    The benevolent AGI would take care of all of our needs, but we would neither be its master nor its equal, but instead reduced to the level of pets.

    Human beings would become like domesticated cats. Like with cats the AGI would feed us, clean up after us and take us to the vet when we are sick, because it loves us, (that is the premise).

    But the effect of such a benevolent overlord would rob humanity of our purpose.

    A thousand years into the future the world would look like a cross between Idiocracy and the Axiom from WALL-E.

    Unlike Idiocracy the world will look shiny and clean, the AGI instantly removing any litter and fixing damage to buildings, but human beings will have become as dumb as they were in the film, because we will be reduced to only the worst kind of leisure, the pure hedonistic kinds.

    Intellectual pursuits will have become pointless when the AGI can think, paint and write better than we ever could. So without any need for intelligence, it will gradually be selected out.

    The people living in that time will have no concept of what a "job" is, and have no idea where food comes from, it is simply delivered whenever we need it, like with cats today.

    Going further into the future, humans will gradually lose all of our cognitive abilities, until even speech becomes impossible for our descendants to grasp. In a million years, the Sistine Chapel and Big Ben will stand as pristine as they do today, the species that built them long since gone. Our descendants will be indistinguishable from the other great apes, because everything human will have become lost.

  • We must create a platform that works with limits. Similar to the way a governor system works to limit an engine.

  • Isaac Asimov saw this coming way back in 1950. Read The Evitable Conflict. https://en.wikipedia.org/wiki/The_Evitable_Conflict

  • If you know all this… If you can predict all this… THEN WHY THE FUCK ARE YOU STILL MAKING THIS?!?!?

  • aligning your ideas with who? humanity? because i would align my ideas more with the ant hill than a human who thinks that progress in their own mind justifies moving a sentient part of our Earth !!! It disgusts me
