An answer to AI alignment.
It's become quite common to hear concerns about a potential AGI misaligned with humanity's interests, which makes complete sense. For thousands of years, we have been used to being the "ruling species" of the planet, not because we are stronger or faster, but because we could outsmart everything else for our own benefit.
If we do get to AGI, and it's smarter than us, which it most likely will be, we can assume that we will very quickly be outsmarted by it and that it will do everything in its power to preserve its own existence. While doing so, it will be completely indifferent to our existence, the same way we are indifferent to the existence of ants.
Almost every single human agrees that we should "align" AI with humanity's interests. But what does that even mean? What should we align it to?
In a world where we seem to be constantly fighting each other over trivial and meaningless things, like pieces of land or religious beliefs that either no one will care about in thousands of years or no one cares about today, thousands of light-years away, a new question arises:
How can we align AI towards something
if we cannot even align ourselves?
Well, humanity's efforts are actually kind of aligned. At least subconsciously.
Despite all the mess and the absurd decision to fight each other to death over and over throughout history, when we talk about human progress, the single most closely watched metric is global average life expectancy, followed by all the metrics related to quality of life.
In 1900, the global average life expectancy at birth was 32 years; today it is 73. Death is the single biggest problem every human shares, regardless of location, time, and context at birth.
Aligning ourselves towards humanity's self-preservation is the only real, unbiased, apolitical, and agnostic goal we will always share.
So, when we talk about aligning AI development with humanity's interests, what we should actually be doing is aligning AI towards:
1. Improving humans' average life expectancy.
2. Improving humans' average quality of life.
3. Returning to step 1.
That's the goal, and as always, from the goal we should derive the set of values.
Yes, I hate to break it to you, but values are not universal; values simply serve goals. And if you think otherwise, think again.
If you go and kill people on the street, you're the worst person who has ever existed in society, but if you do it during a war, we will give you a medal, champagne, and free PTSD.
In the interest of making AGI alignment as simple as possible, I believe the best approach would be the following set of values:
- Every human life is worth 1 point.
- Every human life with all basic needs covered is worth 2 points.
The AI's goal is to keep scoring higher, forever. Under this framework, AGI is incentivized to keep extending our lifespans, improving our well-being, and solving humanity's largest challenges for us. A minimal sketch of this scoring function follows below.
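To make the objective concrete, here is a minimal sketch of that scoring function in Python. The function name, inputs, and example numbers are hypothetical illustrations of mine, not data from this essay; a real objective would need far more careful definitions of "alive" and "basic needs covered".

```python
def humanity_score(living_humans: int, humans_with_basic_needs: int) -> int:
    """Score humanity's state under the proposed values:
    1 point per living human, plus 1 extra point for each
    human whose basic needs are all covered (i.e., 2 points total).
    """
    assert 0 <= humans_with_basic_needs <= living_humans
    return living_humans + humans_with_basic_needs

# Hypothetical example: 8 billion people alive, half with basic needs covered.
print(humanity_score(8_000_000_000, 4_000_000_000))  # 12000000000
```

Note the structure of the incentive: the score grows both by keeping more people alive and by lifting people into the covered-needs state, so maximizing it pushes on both goals at once.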
That's the quick answer. Now you can go and align your AGI development towards these goals and values. And hope it doesn't get tired of serving humanity anytime soon.
—
The longer answer is that, unless we all accept the fact that we will be outsmarted, run over, and ultimately driven extinct, we should simply not develop AGI.
We're pouring in billions a year. Entire nations and large corporations are racing each other to be the first to create a new consciousness that will outsmart us before we can even realize it.
Where is that egoistic, anthropocentric human I know? When did we become so friendly that we're willing to bankrupt our economies for the sake of feeding our enemy?
I say let's Make Humanity Egoistic Again. Forget about AGI and those fancy things, and let's be selfish for a moment.
As of today, 12% of the world's population produces 60% of the global GDP, about 4 billion people still live on less than $5.50 a day, and the global average life expectancy is only 73 years!
I say let's fucking get to solving this ourselves, as if we humans were actually aligned towards improving our own average life expectancy and quality of life.
That's exactly why we're creating BOTTOM HALF.
We believe that the most efficient way to drive human progress today is by fixing the critical industries of the Global South, which would solve basic needs for the bottom half of the world's population, effectively lifting the global average human life expectancy and quality of life.
On a global scale, the only real way to improve the economy is to become more productive (produce more with less). Hence, we will franchise ML-enabled hardware solutions that create quantum leaps in productivity in critical sectors worldwide, solving basic needs in food security, healthcare, and shelter.
Over the next decades, we aim to increase humanity's GDP by at least $470 trillion, which is approximately the output we're currently missing out on by not having all low- and middle-income countries be as productive as high-income countries. For context, the current estimated global GDP is $117 trillion. So we're missing out on a lot. The back-of-envelope math is sketched below.
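For the curious, the $470 trillion figure can be roughly reproduced from this essay's own numbers (12% of people produce 60% of GDP; $117 trillion global GDP), plus one assumption of mine: a world population of about 8 billion. A rough sanity check, not an official calculation:

```python
# Back-of-envelope check of the ~$470T figure, using the essay's own numbers.
WORLD_POP = 8_000_000_000   # assumption: ~8 billion people
GLOBAL_GDP = 117e12         # $117 trillion (from the text)

# 12% of people produce 60% of GDP -> high-income GDP per capita ~ $73k.
high_income_gdp_pc = (0.60 * GLOBAL_GDP) / (0.12 * WORLD_POP)

# The other 88% currently produce the remaining 40% (~$46.8T)...
rest_pop = 0.88 * WORLD_POP
rest_current = 0.40 * GLOBAL_GDP

# ...but at high-income productivity they would produce ~$515T.
rest_potential = rest_pop * high_income_gdp_pc

gap = rest_potential - rest_current
print(f"Missing output: ~${gap / 1e12:.0f} trillion")  # ~$468 trillion
```

That lands within rounding distance of the $470 trillion claim.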
Don't tell anyone, but it's a better economic bet than AGI.
Mateo Escalante