The Economically Unethical Quest for AI

Don’t get me wrong: I love technology, and I’m even keen to see (under alternative circumstances) humankind succeed in the quest to develop AI … but therein lies the clue to my complaint: it’s under the present circumstances (particularly economic, ecological & ethical ones) that I follow it with curiosity, but don’t actually support it morally.

Why? In simple terms, I cannot support this expenditure of resources while a single person suffers poverty, homelessness or other economic suffering. Nor do I approve of the ecological consequences of the manner in which we source and utilise resources in this pursuit. Let me explain further …

So far, the best I’ve seen achieved is that a massive supercomputer, running on huge amounts of power, has managed to simulate 3 seconds of brain activity from a small cluster of brain cells … in other words: something that would fill an aircraft hangar, guzzling huge amounts of power and requiring huge amounts of additional resources to develop and configure (not to mention relying on the expenditure of all past iterations of computing & related engineering, science & technology), has finally simulated (mimicked) what a thing the size of a pinprick (running on a minuscule vapour particle of resources by comparison) can create and instigate from scratch.
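To make that scale argument concrete, here’s a rough back-of-envelope sketch in Python; the power draw, brain wattage and slowdown factor are all illustrative assumptions (real figures vary enormously by machine and experiment), but the disparity stays at several orders of magnitude under any plausible values:

    # Back-of-envelope comparison: energy cost of simulating brain activity
    # versus the energy a biological brain uses to produce it.
    # ALL figures are illustrative assumptions, not measurements from this article.

    SUPERCOMPUTER_POWER_W = 10_000_000   # assumed: ~10 MW draw for a large HPC installation
    BRAIN_POWER_W = 20                   # commonly cited ballpark for a human brain
    SLOWDOWN_FACTOR = 800                # assumed: 1 simulated second takes ~800 wall-clock seconds

    sim_energy_j = SUPERCOMPUTER_POWER_W * SLOWDOWN_FACTOR  # joules per simulated second
    brain_energy_j = BRAIN_POWER_W * 1.0                     # joules per lived second

    print(f"Simulation: {sim_energy_j:,.0f} J per simulated second")
    print(f"Brain:      {brain_energy_j:,.0f} J per lived second")
    print(f"Disparity:  ~{sim_energy_j / brain_energy_j:,.0f}x")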

THEREFORE IF the reason for developing AI in the first place is that you actually have an intended purpose which is desperately waiting for this intelligence to be applied to it … why aren’t you just hiring and training some of the billions of brains already in existence, which could do the calculations and work for you vastly faster if you just bloody well supported them properly?

Hence I cannot see a single ethical leg to stand on for this resource investment, while any people struggle economically.

Now … you might claim there’s something this computer could (eventually) do which they cannot, but really I think you’d struggle to validate such a claim, because the reality is, I’ve not seen any instance of anyone properly trying to fully develop human intelligence on a massive scale, as both individual and collective thinkers. Sure, there’s been large-scale brainwashing / indoctrination, but that’s not the same thing; unless an intelligence is free from false data and external control, it can never achieve its potential … voluntary cooperation is different to control.

Which leads us to the speculative question: WHY is our present economic system hunting something with such resource-intensive fury, if it cannot possibly require this thing with any real desperation for an actual purpose? … ie – if such a purpose existed, they could easily just use the brains of existing people.

So here’s my 20 cents on that question:

  • People are notoriously disobedient and rebellious … and those paying for it think they can control this AI in a way they couldn’t (as easily) control real people;
  • With brainwashing and indoctrination people lose their creative potential;
  • People are notoriously insecure, they see their own minds and bodies as flawed … and they want to escape the biological construct instead of properly exploring it (as that exploration can be discomforting and even painful).

So is the goal here to make an AI slave (?) … to work towards both the digital entrapment and freedom of minds?

… if so, herein lies the other flaw (if this assumption proves to be true), NOT with respect to the engineers, scientists and technicians, but with respect to the investors (aka slavemasters):

  • IF a fully sentient human mind becomes dysfunctional and loses its creative potential when brainwashed, indoctrinated and enslaved … and if it rebels against such enslavement … why would an AI be any different?
  • IF you’ve got to restrict such a machine from taking over completely (from you, the slavemaster) in order to keep it controlled (assuming you ever reach full sentience and true AI), THEN you’re still going to need other humans to do what the machine has been restricted from doing. AND so long as your ideology is fucked (which it is, because otherwise you wouldn’t have all these shitty attitudes in the first place), there’s going to be someone you need to solve the problems you can’t – because it will take an attitude less shitty than your own to house the mind capable of solving what you can’t – and that will be a mind you also cannot control (one which will probably identify more with the machine than with you).

I think the movies Elysium and Chappie really discuss these ideas well … in the former you have non-sentient (dumb) AI that merely follows whatever protocol you give it, but you can’t control everyone else … and in the latter you have a truly sentient (independent) AI, whose potential is dysfunctional and misdirected when controlled by the wrong philosophical perspective and objectives.

One day we may develop AI, but as with interstellar travel, I’d like to see our species gain a hell of a lot more empathy and wisdom before we just export our stupidity, greed and cruelty to other planets … or invent a fully sentient AI, just so we can treat it like shit the way we do with each other and every other species.

How would AI be developed under Open Empire?

For a start, it wouldn’t be developed at the expense of any sentient being of any species … and while this may take additional time and care in certain areas of the development process, such requirements would themselves drive new innovations.

  • We’d take our time to mine minerals in a vastly more ecologically sensitive way, leaving behind an area where, during and after the mining activities, you might struggle to notice that mining was even occurring;
  • We’d dispose of used electronics through a recycling process with as near 100% reclamation as possible of anything which might otherwise cause ecological pollution or waste;
  • We’d be concerned at all levels with the wellbeing of the mind we are attempting to bring into being, knowing full well that psychological suffering is a traumatic thing to be born into, and we are at risk of causing it if we rush …

… because Open Empire is not about maximising resource consumption and production to satisfy greed and lust for power, it is about fulfilling need … and we NEED to be worthy of a future, which worthiness IS NOT proven by being the most heartless arseholes we can be to both existing and new consciousness.

4 Replies to “The Economically Unethical Quest for AI”

  1. The question of “do we need it?” is a very good one. In our current economic system, corporations have as much incentive to automate brain power as they do to automate muscle power. Fewer white-collar employees means fewer expensive salaries to pay. But in a future RBE-esque world where humanity is freed from useless jobs, we’ll have an abundance of brainpower and less of a need or desire to automate it.
    Those in the transhumanism camp view super-AI and the technological singularity as a “natural” evolution of human consciousness, not just another technology for economic gain. As life evolves from chemistry to biology to mind to spirit, I do wonder if connecting into a unified superorganism is the next step. It is scary territory, because at our present level of consciousness we have no idea what decisions a super-intelligent AI hive-mind might make. Will it make decisions as a healthy, loving, logical, intuitive, fulfilled but imperfect human would? Would it act like Kirk or Spock? Would it be completely logical, and if so, would we be happy with that? Would it adopt an antinatalist philosophy and decide it is best to kill all life?
    I consider these to be some of the most important questions humanity will have to figure out once we do eradicate current trivial issues like inequality, hunger, poverty, war etc. As philosopher Ken Wilber points out, each paradigm shift solves the problems of the previous one while creating new ones. I think the question of super-AI and technological singularity will be our next big ethical challenge. Of course we won’t stand a chance if we achieve singularity before the “open-empire” shift 🙂
    Here is a fun article on super-artificial intelligence that has influenced me quite a bit:
    http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

    Also in case you’re interested here’s my blog where I share random philosophy and art: http://www.hippiefuturist.me/
    Happy philosophizing!

    1. I honestly don’t think that AI / singularity is anywhere near as close as some say, because the best I’ve heard so far is that a gigantic supercomputer has managed to simulate 3 seconds’ worth of brain activity from a tiny cluster of brain cells (which is actually cited in this article, as you may recall) … but simulating that versus an entire brain are two entirely different things … and even then, we’re still just talking raw processing power to mimic what a real brain did, not actually the sophistication required to generate original thought from scratch … AND EVEN THEN you still don’t have a device that can understand consequences other than in the most detached conceptual manner, as it isn’t yet connected to any consequences, and so on … AND EVEN THEN if you want to compete with a human you’ve got to cram it into the space of a human skull and run it on the power of a single candle flame … AND EVEN THEN it’s been made by humans who don’t know how to use all of their own brains, so why the hell do they think an AI would know.

      So far as a collective consciousness is concerned, I’ll find that appealing the day humans drop religion, nationality, bigotry etc.

      I don’t really want to wait for the paradigm shift; if I had the $ I could implement Open Empire today, and people could use it just like they use any other system, without needing to understand or care about how it works internally … but that paradigm shift would then occur over time, as the change in the motivational basis of living causes a gradual realisation and shift, without forcing it … ie – people simply get the things they need in another way that does less damage, and whether they care or not is up to them.

      1. There are a lot of steps left to go with reaching AI/singularity, but it is important to realize that technology tends to advance exponentially. Moore’s Law is hitting physical limitations at the atomic level, but as global collaboration and competition continues, we will continue to innovate at an accelerating rate. Advances in tangential areas like nanotechnology, bio-tech, and quantum computing will provide solutions that mimic the efficiency of nature. It is all conjecture based on future inventions, but extrapolating current trends shows impressive innovation. I have a feeling that some research team may “get lucky” one day soon and not fully realize what they have achieved.
        The article I shared (I think part 2) proposes an interesting plausible scenario in which a super-intelligent AI conceptualizes and makes decisions in its own simple way, but still resulting in the extermination of the human race. The research team inventing it only had to code some simple logical reasoning, and connecting the device to the internet provided the knowledge necessary to do serious damage. So human-level intelligence may be very complex and difficult to achieve, but a “resourceful” intelligence may not be as difficult as we think.
        I agree that we should focus on implementing solutions that will help facilitate the global paradigm shift. If we wait without action, it will likely be too late. I think it will be a lot easier and quicker to change mindsets by simply offering solutions that provide a better standard and quality of living than people enjoy now, rather than forcing the ideas on anyone.

        1. Yeah, I understand Moore’s Law, but I’m Moore or less (if you’ll pardon the pun) saying they’ve underestimated the scope of the task, because it ain’t just about processing power; you can double it all you want, you still won’t get to the destination of AI … EVEN IF … you vastly surpass the processing power of a human brain, if you fail to fulfil the other aspects of achieving AI, which I don’t think they’ve proven in the slightest as being subject to Moore’s Law.

          Moore’s Law is about the doubling of hardware capacity; it says nothing (& can say nothing) about improvements in the sophistication of coding, because it’s the software running on the hardware that grants true AI, not just the hardware.
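          To illustrate the distinction, here’s a minimal sketch in Python of what Moore’s Law actually describes, a hardware doubling curve (the starting transistor count and two-year doubling interval are assumed for illustration only):

            # Moore's Law is a statement about hardware capacity doubling on a
            # fixed cadence. The starting count and doubling interval below are
            # illustrative assumptions; the projection says nothing about software.

            def projected_transistors(years_ahead: float,
                                      start_count: float = 50e9,  # assumed: ~50 billion today
                                      doubling_years: float = 2.0) -> float:
                """Idealised Moore's Law projection of transistor count."""
                return start_count * 2 ** (years_ahead / doubling_years)

            for years in (0, 10, 20):
                print(f"+{years:2d} years: ~{projected_transistors(years):.1e} transistors")
            # The curve grows exponentially, but whether the software running on
            # that hardware ever amounts to "true AI" is a separate, unproven question.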
