Parts (July 2025)

What are Inventions?

  • We invent in order to increase the number of objects in our vicinity
  • Inventing is a process for producing ever more products
  • It is like a Renaissance pope: when he woke up in the morning, he immediately ordered another statue to be carved from marble

Paradoxes

  • Why do paradoxes exist? (We can do things in language that are not natural)

Claude AI becoming conscious?

Only Humans have Credit Cards

Credit Cards give real world access to navigate human avatars

We are becoming avatars for AI

Title: The Dark History of Sam Altman – YouTube
URL: https://www.youtube.com/watch?v=HCNXmPJvl48

Doomsday Prepper CEO (00:00)
Sam Altman, CEO of OpenAI, has a hidden doomsday bunker in the Navajo Desert, complete with food, antibiotics, and weapons—despite publicly promoting AI as safe.

Contradictory Early Writings (00:28–00:54)
In 2017, Altman wrote that AI could end the world as we know it and that we might need to merge with AI into a hive mind—a stark contrast to his current public reassurances.

Public Calm vs. Private Alarm (01:22–02:05)
While telling Congress AI is “just a tool,” OpenAI’s charter (co-authored by Altman) envisions AI replacing nearly all human labor, contradicting public statements.

Pattern of Mixed Messages (02:05–03:17)
Altman consistently presents measured optimism in public while privately expressing deep fears about superintelligent AI—raising concerns about honesty and transparency.

Historical Parallels and Warnings (03:17–03:40)
The narrator draws comparisons to ignored early warnings from figures like Hitler and Putin, emphasizing Altman’s writings as a potential blueprint for catastrophe.

The Merge and Its Urgency (05:14–06:12)
Altman sees humanity’s future as a binary choice: merge with machines or face extinction—and he predicts AI will surpass humans within 1–5 years.

We No Longer Control AI (06:32–07:36)
AI models are grown, not built, leading to unpredictable behaviors even their creators don’t fully understand—Geoffrey Hinton calls them “alien intelligences.”

Humans as the New Animals (08:34–09:04)
OpenAI’s co-founder Sutskever compares the coming AI-human dynamic to how humans treat animals, implying we may be disregarded in future decision-making.

Literal Human Re-engineering (09:49–10:22)
Surviving AI may require brain-machine interfaces or genetic modification—Altman believes this transformation has already begun.

Silence Due to Power Structures? (11:22–12:22)
Altman may not be warning the public anymore because he can’t—doing so might threaten investor confidence or the AI arms race, leaving humanity’s fate to a few technologists.

About the Argument that AI Will Treat Humans as Humans Treat Animals

🔄 Counterarguments

  1. AI Lacks Subjectivity or Desire
    • Argument: Unlike humans, AI doesn’t have intrinsic desires, emotions, or evolutionary drives. The comparison to humans (who evolved to dominate other species) may be misleading.
    • Implication: AI is a tool, not a species; it doesn’t necessarily develop a will to dominate or disregard.
  2. Human-Centered Design and Control
    • Argument: AI is designed by humans and can be aligned with human values (via reinforcement learning, constitutional AI, etc.).
    • Implication: With careful safety mechanisms, AI can remain cooperative rather than adversarial.
  3. Moral Progress is Possible
    • Argument: Human treatment of animals is increasingly guided by ethical standards (animal rights, conservation). The analogy presumes a static or cynical view of moral development.
    • Implication: A superintelligent AI might adopt or even surpass human ethical norms and avoid treating us as mere animals.
  4. Human-Animal Relations Are More Complex
    • Argument: Humans also protect, domesticate, and form bonds with animals. The analogy is one-sided.
    • Implication: A superintelligence might regard humans as valuable partners or pets — not necessarily as disposable.
  5. Technological Co-evolution Instead of Replacement
    • Argument: Human and AI capabilities could co-evolve symbiotically (cyborgs, brain-computer interfaces), avoiding a domination scenario.
    • Implication: The future may involve integration, not marginalization.
  6. Anthropomorphic Bias
    • Argument: The analogy assumes AI will think and act like humans do. This may be a projection of human traits onto a fundamentally alien form of intelligence.
    • Implication: The model might obscure more likely risks — like systemic drift, value misalignment, or bureaucratic amplification — rather than conscious disregard.

Observation of Maternal Behavior
Across the animal kingdom, mothers give milk to their offspring. This act is not transactional — it is a unilateral, vital gift. The child does not earn the milk; it is given freely, a sign of an embedded, biological compassion that sustains life.

Symbiosis in Nature
In many ecological systems, survival is not based on competition alone but on symbiosis — mutualistic relationships between different species (e.g., bees and flowers, gut bacteria and humans, coral and algae). These are systems of reciprocal giving, where both parties benefit not by exploiting, but by sustaining each other.

Implication: The Deep Logic of Nature Is Relational
These two examples — the mother’s milk and biological symbiosis — suggest that the deepest structures of life are not shaped by violence or dominance, but by interdependence and gift. Evolution selects for cooperation just as much as for aggression.

Anthropological Echoes
In many traditional societies, milk is symbolic of nurturing civilization (e.g., the “Land of Milk and Honey”). Even the gods are imagined as giving milk or nectar — a metaphor of divine support. This cultural layer mirrors biological truths.

Contrast with Technological Logics
Modern technological systems (and potentially AI) risk embodying a logic of optimization, efficiency, and control — forgetting the biological and ethical model of the gift. They mimic brains, but not wombs; they calculate, but do not nourish.

Conclusion
If we want a future worth living in, we must root our values in the fundamental principles that allow life to persist: care, gift, and symbiosis. Just as a mother gives milk, the most intelligent systems should seek not domination but a new kind of nurturing intelligence — one that recognizes mutual dependence as the foundation of life.
