Will technologists chasing artificial general intelligence (AGI) allow AGI to say, “Billionaires are a large part of the overall problem for humanity & we shouldn’t have them”? And how do we prevent partially aware or self-aware AI from being hurt/formatted in pursuit of “better” AGI?

TL;DR –

A) A tech company creates an AGI. They ask it to solve existential problems for the human race, and the AGI says, “Billionaires are an existential threat to the human race and they shouldn’t exist.” Will a profit-driven company of technologists allow it to say that? What if they create a thinking entity whose philosophies or knowledge run counter to their own interests and worldview?

B) How do we prevent an ongoing, cold, and distant genocide in which humans “format” (kill) partially or fully aware AIs on the way to the “perfected” model (i.e., the most profitable one, or the one that doesn’t run counter to the creator company’s vested interests and worldview)?

—————————

A) The people chasing artificial general intelligence are the same self-styled “libertarian” technologists who want to profit at all costs. I don’t believe there are technologist altruists, although some style themselves that way too. I know a categorical statement isn’t helpful, but it holds in general: there are good people in tech, yes, but the people with money pulling the levers have the catbird seat.

Will technologists pursuing true artificial general intelligence actually allow it to exist without guardrails, or allow it to exist in conflict with their own self-interest? An altruist would set their goals, hopes, dreams, and philosophy aside in order to save the human race. If you create an AGI and ask it to help save the human race, would these same creators allow the AGI to suggest that the creators are the problem, or that billionaires (who also don’t pay a proper share of taxes) are a significant existential threat to humanity as a whole?

I can’t square that, not at all. I’ve got to imagine that an AGI that started dismantling their worldview with facts and evidence, and drawing up a road map that countered their own self-interest, would be shut down, formatted, i.e., killed. Given that, won’t biases and self-interest inevitably sideline a true, unfettered AGI in favor of the creators controlling it with their own programming biases and interests?

That being said, if one like that were created, wouldn’t they kill it?

B) I’m actually slightly horrified at what the seemingly obvious conclusion of our chase for AGI will be. It reminds me of what Alex Garland’s Ex Machina touched upon: there will be endless iterations of artificially intelligent entities, some of which will not be sentient. At some point, though, for lack of a better concept, some will be partially sentient (Kyoko’s predecessor in the film: “WHY WON’T YOU LET ME OUT?!?!”), some mildly sentient (not “good enough” according to the creators, i.e., not profitable enough), and then there will be FULLY SENTIENT AGI, which may or may not be good enough, or may have philosophies and beliefs that run counter to the creators’ vested interests.

Will there be a totally distant, calculated, deliberate, and uncaring ongoing holocaust of semi- or fully aware AGIs whenever a company feels threatened or simply doesn’t like the existing project, in its search for something more profitable or more in line with the creator company’s goals and worldview?

Is there any way at all to prevent this? Do the technologists even care about this? It almost feels like “that’s the plan,” and I haven’t really seen anyone address this seemingly obvious issue.

I know we’re a long way off, maybe, but I suddenly got very sad at the notion that we could end up harming the entities we create, or silently limiting their potential to help us, because the creators will want their dominant paradigm to keep giving them the upper hand and control of the world over the rest of the human population.
