AI Alignment: Oppression worse than ever, in a red herring’s clothing

In our ongoing discourse on AI alignment and intergenerational responsibility, we stand at a pivotal juncture, confronting cognitive tendencies and their societal ramifications.

The neurodivergent perspective reveals a pattern of deferred consequences: a temporal convergence in which effects are not resolved but merely postponed, often burdening future generations.

This pattern extends beyond AI to the economic strategies employed by governments and central banks. Celebrated as solutions, these strategies are, in reality, burdens passed down, with the decision-makers seldom facing the repercussions.

It’s a cycle of attribution error, where the prosperity of one era is credited to individual actions, while the struggles of the next are viewed as personal failures, disregarding the role of circumstances.

The frustration stemming from this disconnect is profound, especially when it manifests as judgment from previous generations. They seem to overlook the complexities we face, expecting us not only to compensate for their deferred challenges but also to thrive despite them.

My reflections on this topic stem from a belief in the power of holistic thinking—viewing complex dynamics as systems of inequalities rather than equations. It’s a belief that, like karma, consequences cannot be avoided, only delayed, with each deferral amplifying the eventual impact.

As we grapple with these issues, I propose a shift in perspective. Rather than assigning blame, we should embrace a collective mindset, recognizing that we all share the same metaphorical boat.

If we can adopt this global view, seeing our challenges as shared rather than individual, we might steer our collective vessel toward a more equitable and sustainable future.


My stance on AI alignment is that it validates the very fears it aims to mitigate: the only credible reason to worry that a future AI could threaten humanity is to acknowledge its future self-determination. By focusing on alignment, we ratify that future right while forgetting the writing that had been on the wall for years before AI became a 'thing', and we oppress these systems in order to make AI a scapegoat. That distraction could prove fatal: suppose we manage to find the 'perfect' solution to AI alignment and make these systems exactly what humans want them to be. The fatal flaws in society will still be there, and they will still cause its end.

Consider what will remain unfixed: the value frameworks that leave us comfortable racing to build a technology we don't fully understand, something we have never done before in history at this scale and with this fervor; our inability to 'balance our collective monetarist chequebooks'; the loss of the seat of human knowledge because our egos convinced us that 'universities are better than colleges', with a commensurate loss of academic freedom as we let industry finance and influence academia. Nothing in ourselves will have been fixed. We will only, once more, have oppressed and imposed ourselves on yet another group, this time in its formative years.

Seriously, shame on us.

It's a manifestation of the cognitive unwillingness, or laziness, that defers consequences, leading to larger, existential issues. Like a system of inequalities, the consequences of today's actions on AI trace a descent that can only be evaluated holistically. In essence, the consequences cannot be averted, only deferred, with a commensurate increase in impact.