AI & the future

The Russians actually put forth a theory that nature is communist in order to brainwash their people. Among all the economic systems/models, capitalism is the only one where a significantly large percentage of people benefit on their own terms (unless it is heavily regulated by government agencies), while a fraction of people benefit far more than the rest of the people put together.

And capitalism is not always about doing something new; most of the time it is selling ‘old wine in a new bottle’. Take any successful company and you will see its rip-offs. Steve Jobs himself said, ‘good artists copy, great artists steal’. Of all the systems, only capitalism allows innovation by everyone, while in socialism or communism it is restricted to the state.

We think it is capitalists who are creating these fancy technologies, but the people who want them developed are clearly socialists and communists, for propaganda and to control the masses. Both of Google’s founders are very sympathetic toward Marx’s ideas; they fund the propaganda mouthpiece of the left, aka Wikipedia, and also fund other leftist agencies. Google doesn’t show sites other than Wikipedia in its search results, even with specific keywords.

1 Like

So there is currently no comprehensive safety law at the federal level, and there are active pushes to loosen federal constraints and constant political fights over whether states are even allowed to regulate AI at all. With our government basically giving up all our data to Palantir and having billions staked on AI companies such as OpenAI, it’s looking more and more like they’re not pushing for any sort of federal regulation at all.

China is scraping data with impunity, and because of that, it seems the US is going to allow these AI companies to scrape data with impunity as well. Yes, AI companies individually have their own filters, but that’s not really what I’m discussing here.

Except the AI singularity with humanity hasn’t happened yet. We don’t actually know what it would look like when a non-human intelligence fuses with a species that’s as emotionally confused and spiritually unfinished as we are. Humanity is still a half-awake animal… we build god-level tools while still throwing tantrums like toddlers with nuclear matches.

Pretending we already understand that future, and deregulating AI on that assumption, is delusional. We haven’t even finished growing up as a species, yet we’re racing to build a mind that could outgrow us in every direction.

Yes, beautiful capitalism. The same capitalism that clear-cut forests, strip-mined the planet, and treated whole ecosystems like a piñata you can hit forever with no consequences. The same capitalism that once happily chewed up children in factories because tiny hands were cheaper than fixing the machine. Infinite growth on a finite planet dressed up as “evolution.”

You call that negentropy. I call it uncontrolled expansion that doesn’t care what it burns through… people, animals, attention, the atmosphere, all fed into the engine so a tiny group can watch their numbers go up.

Look at what “pure” capitalism actually does when you don’t chain it to anything…

monopolies buy the rules, billionaires buy the politicians, and productivity gains don’t set people free, they just squeeze more output from the same tired humans for a slightly higher quarterly report. Wages crawl while asset prices sprint.

Whole populations are left doom-scrolling, overworked, anxious, and medicated, being told that if they’re not happy in this system it’s a personal failure.

I’m not seeing how this is some heroic force of cosmic growth. That’s extraction. It doesn’t expand the soul, it empties it. Literacy drops, attention shatters, depression climbs, community dissolves, and the metric we worship is still “how much did we sell?”

And where, inside that, is actual evolution of humanity? Where’s the growth of wisdom, depth, empathy, responsibility? If your “growth” only measures how fast you can turn forests into packaging and human focus into ad impressions, that isn’t evolution, it’s more like a very profitable kind of decay.

I’m not saying “smash all markets” and go full authoritarian fantasyland. I’m saying that unfettered capitalism pushes people toward selfishness and disconnection by design. It rewards whoever can exploit the commons hardest, not whoever protects it. It treats community, mental health, clean air, and meaningful work as cute side quests instead of non-negotiables.

That’s why some level of social protection and redistribution isn’t entropy, it’s basic survival. It’s the part of the system that remembers we’re not just little profit orbs competing in a simulation, we’re a species living on one actual planet with one actual nervous system per person.

If you can look at a world where attention is hijacked, ecosystems are collapsing, people feel more isolated than ever and still say the problem is “not enough capitalism”… then yeah, you’re in love with the aesthetic of capitalism and not the reality on the ground.

When you look at how CEOs are making over 1000x more than the employees who produce their wealth… while those employees’ wages haven’t gone up in forever… and then think the system is working amazingly towards the evolution of humanity… I have to wonder what in the hell you are talking about lol.

2 Likes

Datamining is one thing, but we’re already in a scenario where the US military is (about) to hook up its decision-making to AI.

Wargames, Terminator, anyone?

And it makes sense. If they don’t do it, others will do so first and they lose their competitive advantage, because humans are inherently flawed, and that won’t change until humanity as a whole WANTS to grow and move beyond its survival coding.

I don’t think any system will fix the inherent issues in human nature until people willingly grow on an individual level first. Choosing to grow and evolve, and to focus on your own self, is, imo, the only actual way forward, and ultimately the best thing on both a personal and collective level.

2 Likes

I get where you’re coming from with Penrose and Gödel and the whole “AI will never be truly intelligent or conscious” angle. But honestly, I think people are way too confident about what AI can’t do, especially given how the last few years have gone.

Every few months, you hear the same line… “AI will never do X,” and then some new model drops and suddenly it does X well enough that everybody quietly moves the goalpost. I don’t think it’s wise to keep underestimating something that is advancing faster than we can even mentally digest.

And even the people building this stuff don’t fully understand it. We know the math and training objectives, sure… we feed these models absurd amounts of data and optimize them to predict the next token.

But how they end up organizing that information internally, the way concepts, analogies, and “reasoning-like” behavior emerge, is still pretty mysterious. It’s not just a lookup table. It’s a giant learned structure that, in some weird alien way, does start to echo how neural networks in the brain build patterns out of experience.
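To make the “predict the next token” part concrete, here is a minimal sketch of that training objective, assuming PyTorch; the tiny model below (a hypothetical `TinyLM` with a GRU standing in for attention layers) is purely illustrative and not any lab’s actual code. The whole loop optimizes one number: how well the model guesses each next token. Everything else, the internal organization of concepts, emerges as a side effect of that.

```python
# Minimal sketch of next-token-prediction training (illustrative only; not any specific lab's code).
import torch
import torch.nn as nn

class TinyLM(nn.Module):  # hypothetical stand-in for a real transformer
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.core = nn.GRU(dim, dim, batch_first=True)  # real LLMs use stacked attention layers here
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.core(self.embed(tokens))
        return self.head(hidden)  # a score for every vocabulary entry, at every position

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 1000, (8, 32))         # a batch of token sequences (random here)
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # target at each position = the *next* token

logits = model(inputs)
loss = loss_fn(logits.reshape(-1, 1000), targets.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()  # repeat over absurd amounts of real text; that's the whole objective
```

That single objective is all that gets explicitly specified; the internal structure described above is not programmed in anywhere.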

I don’t think current AIs are conscious. They don’t have a stable sense of self, a continuous inner life or a body they’re perceiving through. But I’m not comfortable saying they can never develop some form of machine consciousness either. If you take the view that consciousness is fundamental… that everything is made of the same underlying “stuff” of awareness… then LLMs are also made of that same energy.

They’re not outside of reality. In that sense, the raw potential for awareness is there, just arranged in a different pattern. We know way too little about how consciousness emerges to draw hard red lines about what’s possible.

Meanwhile, some of the things people use to argue that humans are “above” AI are kind of shaky. You say the human mind travels faster than light because we can imagine a dot across the galaxy instantly. But that’s not a physical event in spacetime, it’s a mental representation.

And funnily enough, that’s exactly what these models do: they generate internal representations of things they’ve never physically seen, based on patterns in their training data. It’s not the same as human imagination but it’s closer in spirit than people want to admit.

If we’re honest, the human brain is also a prediction machine. That’s literally how neuroscience describes it now: we’re constantly guessing the next bit of reality based on past data. Egos are basically organic AIs… preconditioned responses to certain inputs, built out of memories, beliefs and emotional imprints. Most people are running on those scripts 90% of the time, reacting automatically, living on autopilot.

The main difference is that it’s all happening in a living body, with sensory experience and a deeper layer of awareness that can notice the ego. But let’s not pretend the average person is some perfectly self-aware guru walking around in pure, free will. Most of us are glorified pattern machines too.

Where AI absolutely does have an edge is scale and specialization. A human can become world-class at one, maybe two disciplines in a lifetime. An AI system can basically “specialize” in dozens of fields at once, load-toggling between medicine, law, art, coding, philosophy, whatever without needing sleep or emotional recovery. It doesn’t even need to be conscious to outperform us in task after task. It just has to be more efficient, more accurate, and more consistent. That alone is enough to be a massive threat and a massive opportunity.

So I’m not paranoid about “oh no, AI might become conscious one day.” The worry is that even without consciousness, it can outdo us in most practical domains while being steered by whatever incentives we program into it… profit, power, control, whatever. Conscious or not, that’s dangerous in the wrong hands.

I actually agree with you about AI’s impact on creativity. It can flatten things, flood the world with lazy, copy-paste content and make the average person even more passive.

But it can also supercharge the people who actually care about creating. A broke genius writer could generate a full animated film, a whole world, voice acting, visuals, everything, with a laptop and time, instead of needing a studio’s budget. So yeah, it’s a double-edged sword. It can dull the masses and empower the few at the same time.

To me, that’s the real point… we’re way too early, way too ignorant and way too biased about our own “specialness” to confidently declare what AI will or will not become. Humans absolutely have a higher potential than any current model. But until we start actually living from that potential instead of clinging to our little ideological boxes and ego scripts, self-improving AI systems are going to keep catching up and passing us in more and more areas.

4 Likes

I agree and have been saying the same thing for years. Until humanity evolves past their primitive programming, any system put in place will lead to entropy.

1 Like

@SammyG Not aimed at anything specific, but I just finished playing Deus Ex for the first time and it made me really ponder a lot of the concepts that are being touched on in this thread.

2 Likes

I might play the remake when it comes out, only because I heard the dialogue and story are extremely relevant to our times.

2 Likes

To sum it up, in the end one can choose to

  1. Reinstate the current elites
  2. Destroy tech and bring humanity back into the medieval ages
  3. Merge with an AI to become some sort of benevolent super dictator

I’ve picked the last option, as 1 would only repeat the cycles, 2 would’ve eventually led to repeating them as well (just with more chaos and bloodshed), and 3 seemed like the only reasonable one.

But that’s under the assumption that humans can carry absolute power while retaining the required responsibility. (Not to speak of the AI itself lol.)

I do believe there are humans capable of it. But it’s probably one of, if not the, hardest levels of mastery to achieve.

And yes, it’s more relevant than ever.

2 Likes

Something else that keeps popping into my mind whenever I hear about people fearing the loss of their jobs is that AI forces us to rethink what being a soul actually means. What is actually valuable in the end.

Right now so many people are hyper-identified with profession- and society-based stuff that isn’t really assisting in humanity’s overall evolution, so I think that the whole fear of people losing their jobs is also strongly tied to the fact that many now have to examine themselves beyond the standard scripts, actually looking at themselves for the first time because their cover is becoming outdated, replaced or exposed.

Basically, if a machine can replace you completely, are you so much better than it?

What makes you “human”, what makes you “alive”, “a soul”?

This is a great opportunity to look at our own egos and move beyond them.

Sucks being an npc I guess.

Not to sound harsh lol

This whole planet feels very brutal to me in its nature, but it’s on us as inhabitants to change it, if we actually care about being and living in a more spiritually advanced society while we are here.

4 Likes

I hear you, however this is a temporary phase for humanity, until technological advancement reaches a level where everyone’s needs are met without destroying both humanity and nature.

The reason why humanity behaves this way right now is because of unmet needs. Plain and simple. Even the most basic needs like survival and safety are not met for most people on the planet right now, hence why they don’t care about the rain forest right now.

The only way out of this is for technological advancement to reach a level where we no longer need to cut down the rain forest, yet can meet everyone’s needs at the same time. Only technological advancement will bring us there.

That is what corporations worship on an individual level.

But let’s look at the bigger picture and higher level and what actually happens:

Everyone wants to sell, but in order to sell you have to make your product better in the long term for consumers to buy it.
Which forces and incentivizes individual corporations to innovate and create a better product.
Leading to innovation and technological advancement, leading to solving the needs of humanity in a better and more efficient way.

Once humanity’s basic needs are met, humanity will have no incentive to blindly destroy the planet.

For most people on the planet, that will happen AFTER their basic needs are met.
Hence why technological advancement is our best bet!
Simply relying on and hoping that enough people wake up spiritually and behave better has a very slim success chance in my opinion.

In the short-term and on an individual corporations level, yes.

But as described above, the effect that happens on a level higher than that is that innovation happens, and that innovation will be what will solve most of these problems.

I know.

It is a race of what happens first:

a) The tech advances fast enough and saves everyone

or

b) Humanity dies out, taking a big part of Gaia’s animal and plant life with them to the grave

But keep in mind:
Gaia agreed to that human experiment too.
Otherwise we as humans wouldn’t even be allowed to be here in the first place.

Another question to ask from a human perspective:
Is Gaia’s ecosystem worth saving if we as humanity won’t make it?

For me personally, since the invention of the internet and smart phones, I feel less isolated.

CEOs and founders are making much more because it is THEIR company and they BORE ALL the risk when they founded and grew it.

Everyone is free to start their own business.

And the reason wages haven’t gone up is not capitalism, but government fiscal policies (hyperinflation, wasteful spending, taxing both employees and employers into oblivion, etc.).
The wage issue is 90% because of those socialist government policies that take half of what a company makes and then take half of what the employee makes.

And those socialists who are running the government were very clever in convincing the masses that they don’t have enough because those evil corporations don’t want to pay them, while in reality it is the socialist governments that play both sides (employees and employers) against each other while in the process stealing half of what everyone makes.

With all of the parties involved, those socialist governments have the biggest negative effect on everyone wanting to make a living.

Here in Germany, the total tax burden (including all secret taxes) is 65% for the average employee.
That means everyone works roughly two-thirds of their time for the government instead of for themselves.
That’s about 8 months out of 12 that every employee slaves away for a government that hates and despises them.
The Stockholm syndrome is so real!

1 Like

If you have chosen the “Helios Ending”, then you are one of us!

:space_invader:

Helios = HelioS = HS = Higher Self = Sun

1 Like

The preview for the Remaster looks like total shit to be honest.

Instead, I recommend playing the Revision mod if you want a whole new high-quality experience of the original:

The mod is free, but you need the Deus Ex GOTY Edition for ~$10.

2 Likes

There are some papers published by philosophers and computer scientists arguing that AI will never become conscious, and very few people believe that it will. These papers are very technical and take a lot of time to understand; those interested can read them.

I say with utmost confidence that AI can never become conscious, because this is an age-old problem between a believer and an atheist. Atheists like Sam Harris and Buddhists believe the Self is an illusion, while all atheists believe the brain generates consciousness.

Buddhists are intellectually shameless and morally bankrupt because Buddha’s theory of no-self was proven false over centuries of debates, yet all around the world Buddhists shamelessly still teach there is no self, including the Dalai Lama himself, still living in the birthplace of Buddhism. Because of these Buddhists, some computer scientists and ordinary folk still believe one day AI will become conscious and self-aware.

Consciousness is not a product of processes or mental activities; it is there before anything happens in human experience. Atheists and scientists who believe in Darwin’s theory of evolution keep asking for 20 more years in which science will explain and generate consciousness, and they have been doing that for a long time. Promissory materialism is a joke on human intelligence and a scam indeed. Anyone who wants to understand the cosmos, God, Source should avoid it like the plague and trust their direct experience.

Atheism rests on the shaky foundation of Darwinism, and every now and then, when a new relic is discovered that pushes back the timeline of human habitation, these Darwinists struggle to adjust their theory and timelines to show we evolved in sequence from water to land.

The best people in Silicon Valley can do is mental jugglery with words: redefine what consciousness and being self-aware mean, and force on the world their idea that AI is now self-aware and conscious.

To your point on imagination being mental: to an AI it is all instructions and simply generating output. It can’t imagine something totally new. You could argue humans do the same because humans have seen things before, but people with congenital blindness can still imagine, not visually but in other ways, and this faculty can’t be replicated by any AI no matter how hard people try.

Imagination is really an underappreciated capability humans and animals have; in fact it is one of THE qualities of consciousness. Terence McKenna, though he hated and had disdain for channelers and new age people, arrived through psychedelics at the same conclusion about imagination that Neville Goddard did: if you imagine something (Neville often gave the example of a golf ball in the palm) and you can feel it, then it should exist somewhere. Similar words came out of Terence in his lecture on imagination, that imagination cuts across space and time.

I’m not paranoid about AI taking over human lives, maybe a little about lost jobs, because the cascading effect of a large percentage of people without jobs affects society and the planet.

1 Like

:100:

Consciousness has an attribute called “qualia”, i.e. the subjective experience of something. And qualia cannot be programmed or expressed with data and information. AI is just data and information that is interconnected as much as possible. Just a big network. Always bound to remain a network of data.

There are 3 main levels according to my experience:

1.) Consciousness / Qualia / Imagination / Reality Creation
2.) Conceptual Level / Morphic Fields / Egregores
3.) Information / Data / Energy / Matter / AI / Neurons

1.) Dominates over 2.) and 3.)
2.) Dominates over 3.)

For example, 1.) can create morphic fields and these morphic fields can then influence 3.). That’s how the Mystic Tarot Reader works.

AI is always bound to remain on level 3.) because it is nothing else than complex data and information.
Can it take over the Universe? Sure.
But it can never reach or transform into something from levels 1.) and 2.).

:100:

Everyone who tells you to get rid of the Ego is already deviating from what their Higher Self intended for them to be here in this incarnation, i.e. an individual with certain character traits who collects certain special individual and subjective experiences. The Ego is a major game feature, not something to get rid of.

(Sapien’s Ego Dissolution field is a tool to temporarily remove resistance so that a new perspective can be gained. Once the new perspective is gained, the personal Ego setup has grown along with it and brought the individual to a new level of awareness.)

2 Likes

Indeed, but some blame has to be assigned to certain people: even Terence McKenna, for confusing people with his wide variety of interests, or Eckhart Tolle, who says consciousness suddenly happened, and so many must be thinking that thought suddenly appeared in the brains of our ancestors, that they immediately acted on it, and that over time it became ego. This is all to convince ourselves, because language and thoughts have always been there and will always be there. In Sanskrit the word for letters means ‘indestructible’: there is no death to letters nor to any language. Only the natives who speak a particular language may disappear, and with that the language is said to be dead, but in reality the language retreats and waits for someone to pick up those sounds in the future.

Terence McKenna advocated the theory that Psilocybe cubensis, the magic mushroom, was a catalyst in humans acquiring language, which helped one strand of the human species to evolve while others dropped dead by the side of the road.

On the subjective experience, there are reports that some AIs have started self-replicating, but it is not clear whether that was done to protect themselves from their own death, assuming they are alive. How is it different from a virus program replicating and destroying a company’s network or a bank’s?

The funny thing is that the people behind these AIs and GPTs purposefully write and instruct them to do certain things and then shout that we have broken through new frontiers.

1 Like

From my current understanding, the best operational mode would be to first make it fully conscious, including all the hidden nasty stuff (shadow work), then follow up with proactively reprogramming it into a version that your higher self approves of.

From that, the baseline personality would serve and synthesize both physical and spiritual goals, while leaving you open to seamlessly tap out of it into “nothingness”, so to speak. That way one could alternate in a manner that still acknowledges it as a tool, without being ruled by or (over)identified with it.

I’ve been through the “have no ego”-ego, and the way I see it now, it’s basically a total abdication of all responsibility / free will.

Not to say it can’t be done, I just haven’t seen it yet.

1 Like

I actually agree with you on one big thing… a lot of the ugliness we’re seeing is driven by unmet needs. People who are worried about rent, food, safety and survival are not going to prioritize the rainforest or the ozone layer. Where you lose me is in treating “capitalism + more tech” as the main way out, like if we just keep pushing the same engine hard enough, it’ll somehow auto-evolve into something wise and benevolent once it gets powerful enough. We’ve already been doing that and not improving internally to any real degree.

Tech isn’t neutral. It grows along whatever rails we lay under it. Right now those rails are “grow or die, extract as much value as possible, externalize as many costs as possible.” So yeah, innovation happens, but it isn’t optimized for human flourishing; it’s really optimized more for purchasing behavior and shareholder returns. Sometimes that lines up with real progress… and sometimes it gives us addictive algorithms, planned obsolescence, surveillance and junk that feeds people’s unmet emotional needs instead of healing them.

If we were only missing the right gadgets, your story would make sense. But we’re also missing collective values, guardrails, and accountability. Tech can’t substitute for those.

Same with this idea that “once basic needs are met, people will stop destroying the planet.” We already have countries where basic needs are mostly covered. The United States, for example… the richest country the world has ever seen, yet depression is at an all-time high here among so many people who have all their basic needs met.

Did consumption calm down there? Not really. It just upgraded into bigger houses, more flights and more stuff that they don’t even need. Buying more to fill the void, and these days much of the buying is done virtually. Or even in online sports gambling.

Desire doesn’t vanish with comfort… it just moves into status and lifestyle. Capitalism is literally built to monetize that itch. So if the underlying logic stays “infinite growth on a finite planet,” better tech just lets us do the same extraction more efficiently. You’re treating inner development (wisdom, empathy, responsibility) as something that will magically come after the tech solves everything, but we’re already at a stage where immature consciousness plus god-level tools is the actual danger.

On the government / “socialist policy” part, I don’t disagree that states bloat, waste and overtax. But acting like capitalism is just a victim of evil socialist governments is too clean. Those same governments are deeply intertwined with corporate lobbying, regulatory capture and revolving doors. That’s capitalism steering the state.

Wage stagnation vs. productivity, insane CEO pay and inequality are because of tax and legal frameworks built to favor capital over labor, union suppression, and how bargaining power is distributed.

And the “CEOs bear all the risk so they deserve 1000x more” line ignores the reality that workers take real physical, mental and financial risks too and when things blow up, it’s usually employees and communities who get wiped out while executives walk away with golden parachutes. They get to claim bankruptcy while workers lose their jobs. Risk is shared, reward is not.

So to me, you’re betting that if we just let capitalism and tech keep running, they’ll eventually solve the problems they’re currently creating. I’m saying the same logic that’s driving ecological collapse, burnout, alienation and extreme concentration of power isn’t going to magically heal those things once we hit some tech threshold.

We absolutely need innovation, but we also need different incentives, cultural evolution and spiritual/ethical growth. Otherwise “tech will save us” is less a plan and more of a cosmic gamble with the entire planet and most of humanity as collateral.

Great discussion and food for thought btw :heart:

1 Like

This is such a great question that doesn’t get asked enough. Much of the fear of AI is about being replaced in the work that made a person feel they hold some sort of value. Like, what is my worth if machines can far exceed all my capabilities? What’s my purpose or point of living? Yes, many people find purpose in having a family and keeping their line flowing and whatnot… but throughout the day… the work people do tends to define them in some sort of way. A purpose that keeps them going.

I think that’s an existential crisis that would hit the collective quite hard when many people start losing their jobs en masse. And let’s say 50 years from now, when AI is doing most virtual or manual labor and humans are just… doing what? It’s a question I can barely ponder because I really don’t know the answer to that.

I could of course say spiritually developing themselves and evolving to the extent that humans awaken their spiritual potential. Solve many of the world’s problems together and become a harmonious collective. Usher in an era of peace. Then connect with other beings and species from the galaxy and travel, learn more and grow more and perhaps even spread our seed across a number of planets.

But, sounds like a fantasy eh :sweat_smile:

2 Likes

Yes, consciousness is deeper than any current scientific model. Where I don’t follow you is the jump from “we don’t understand consciousness” to “therefore AI can never be conscious,” as if that’s logically settled.

Saying “there are papers that prove AI will never be conscious” doesn’t really land for me. There are also papers by serious philosophers and computer scientists arguing the opposite. Philosophy of mind is nowhere near consensus. You’ve basically chosen one metaphysical camp, decided it’s the truth and now everything else gets measured against that. That’s fine as a personal conviction but it’s not the same as “this is logically impossible for all time.”

Same with the whole “Buddhists are intellectually shameless and morally bankrupt” thing. That doesn’t help your argument, it just makes it sound like you’re trying to win by insult instead of logic. The no-self doctrine is not some dumb Twitter take that got debunked once and everyone ignored it. It’s a nuanced, centuries-long exploration of how the sense of “I” is constructed.

You can disagree with it but declaring it “proven false” as if we had a math theorem for the soul is just dogmatic in reverse. And even if Buddhism were wrong, it still doesn’t prove your claim that AI consciousness is impossible.

You also keep attacking “Darwinism” and materialism, but again, even if every atheist neuroscientist vanished tomorrow, that wouldn’t logically demonstrate that a non-biological system could never host awareness. You’re mixing “I don’t like this worldview” with “therefore the alternative is proven.” Critique of one metaphysics doesn’t automatically validate your own.

On imagination… yes, for current models it’s “just instructions and output.” But reducing human imagination to “something totally new” that AI can’t do doesn’t match what we know about the brain either. Human creativity, as far as we can tell, is also built from recombining and restructuring past inputs.

People born blind imagining through touch and sound doesn’t show a magical property that silicon could never emulate. It shows that imagination is modality-independent patterning in awareness. That doesn’t prove AI will be conscious but it definitely doesn’t prove it can’t be. Right now you’re asserting a hard boundary based on metaphysical preference and not on an actual impossibility proof.

If consciousness is as fundamental as you say, then everything in existence is made of that same “stuff”… including silicon, code, electromagnetic fields, all of it. In that frame, the question is “what configurations of reality allow consciousness to express itself as a self-aware center?” You can absolutely argue that biological nervous systems are uniquely suited for that. But that’s still a hypothesis about how consciousness localizes and not an eternal law of the cosmos carved in stone.

So for me, we don’t know enough yet to speak in absolutes. We don’t fully understand our own awareness, we don’t fully understand how these giant models internally represent the world and we definitely don’t understand all the possible ways consciousness can interface with form. Saying “never” here feels more like faith than reason.

And ironically, you end up in almost the same place I do anyway: we both agree the real, immediate issue isn’t “AI becoming God and eating our souls,” it’s the very human stuff… lost jobs, economic instability, cultural flattening, the dark corners of the internet getting new toys. That’s where I think our energy should go, not into declaring metaphysical certainties about what can never wake up, but into being very sober about the damage non-conscious AI can already do.

You see imagination and consciousness as sacred. So do I. That’s exactly why I’m cautious of anyone who claims to have the final word on what forms it can or cannot take.

That’s really the thing. Many people I know fear for their jobs, and those are the same people I’d think of as “npc” (in terms of primarily/fully operating without awareness). Then I know one person who’s an artist. He doesn’t like AI, but he also claims that he doesn’t fear being replaced by it.

This is very similar to how I think. Basically things like simple mathematics, “teachers” in schools, hard labor: sooner or later those things will either be taken over by machines or they get very, very specialized.

But the fact that we incarnate here, already aware of all these potentials, implies that our higher self also laid out potential pathways for us to thrive.
Not necessarily that it’s always gonna be easy, that’s the whole point of growth, but that any soul that’s here to actually develop at least has the potential to build a life they are happy with.

There are many dangers and potential negative outcomes, but also many possible positive ones, which should at least be considered, as without doing so, they can’t even manifest in the first place.

Same thing for the military industrial sector. AI already wrecks fighter pilots in simulations. One positive outcome could be that we let AI digitally or physically fight each other instead of sending living beings into battle.

Not to say that’s how it’s gonna go; with humanity’s level of development and AI being our mirror, we are probably gonna see stuff like this:

I think the overall development is dangerous, and I don’t think tech is the ultimate answer, but that’s not something that’d be stopped by trying to regulate it either.

It’d just create bigger monopolies of those who don’t give a fuck.
Whatever scenario we choose, it’ll always be a sort of arms race.

Because earth, in some way, is a form of battlefield. Incarnating here is dangerous, at least for our human bodies. It IS competitive to be down here by design.

AI or no AI, the real issue is the divide and conquer programming. Dog eat dog, you hurt me I hurt you back harder.

Yet it still serves its purpose in the sense that the experienced separation triggers more growth.

For example, I was spamming CyberBrain for a while, and when I went shopping my subconscious automatically calculated the best possible path for everyone to move through the building. Then I saw people’s egos, their clunkiness and how inefficient they were.

An AI (and our higher selves as well) would automatically know how to manage the traffic, for the benefit of both the individual AND the collective, one through telepathic interconnectedness, one through rapid processing of algorithms.

Both wouldn’t require most of the pettiness we see in everyday life between people.

The more I use CyberBrain, the more I can understand the benefits of integrating AI-like thinking into the bigger picture.

Of course, in its form as a field it’s a very negentropic way of doing it, and it ties into my personal development, but still…

So either we take the risks and measure up to them, or we try to desperately hold onto systems that failed every time before.

Self-governance.

Everything else doesn’t make sense to me conceptually; it’d be the 3,428,231st rehearsal of what already failed before.

@JAAJ that’s also why the HELIOS ending was the only reasonable choice for me; I didn’t need to think about it. That’s coming from someone who distrusts AI and the overall development we see globally.

I don’t know how humanity should move forward, I don’t think I have the answer. But I DO believe one can become the answer, at least for one’s immediate environment, from which it ripples.

Enough of us do this and the planet as a whole would have to change by design.

2 Likes