• 1 Post
  • 6 Comments
Joined 1 year ago
Cake day: August 31st, 2023

  • Currently the global economy doubles every 23 years. Robots building robots and robot-making equipment can probably double faster than that. It won’t happen in a week or a month; energy requirements alone limit how fast it can go.

    Suppose the doubling time drops to 5 years, just to put a number on it. The economy would then be growing about 4.6 times faster than before (23/5). This continues until the solar system runs out of matter.

    Is this a relevant event? Does it qualify as a singularity? Genuinely asking, how have you “priced in” this possibility in your world view?
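    The doubling-time arithmetic above is easy to check. A minimal sketch, where the 23-year and 5-year figures are the comment’s assumptions rather than measured data:

```python
import math

def growth_rate(doubling_time_years: float) -> float:
    """Continuous annual growth rate implied by a doubling time."""
    return math.log(2) / doubling_time_years

current = growth_rate(23)  # ~3.0% per year (assumed current doubling time)
fast = growth_rate(5)      # ~13.9% per year (hypothetical robot economy)

print(f"speed-up in growth rate: {fast / current:.1f}x")  # 23/5 = 4.6x
print(f"economy after 23 years of 5-year doubling: {2 ** (23 / 5):.1f}x")
```

    At the faster pace, 23 years holds 23/5 ≈ 4.6 doublings, i.e. a roughly 24x larger economy instead of 2x.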





  • 1, 2: Since you claim you can’t measure this even as a thought experiment, there’s nothing to discuss.

    3. I meant complex robotic systems able to mine minerals, truck the minerals to processing plants, maintain and operate the processing plants, and load the next set of trucks; those trucks go to part-assembly plants, where robots unload them, feed the materials into CNC machines, mill the parts, inspect the output, and pack it onto more trucks… culminating in robots assembling new robots.

    It is totally fine if some human labor hours are still required; this cheapens the cost of robots by a lot.

    1. This is deeply coupled to (3): if you have cheap robots, and an AI system can control a robot well enough to do a task as well as a human, it is obviously cheaper to have robots do the task in most situations.

    Regarding (3): the specific mechanism would be AI that works like this:

    Millions of hours of video of human workers doing tasks in the above domain, plus all other video accessible to the AI company -> tokenized, compressed descriptions of the human actions -> an LLM-like model. The LLM-like model is thus predicting “what would a human do”. You then need a model to translate that prediction to robotic hardware that is built differently than humans; this is the “foundation model”. Finally, you use reinforcement learning, where actual or simulated robots let the AI system learn from millions of hours of practice and improve on the foundation model.

    The long story short of all these tech-bro terms is robotic generality: the model will be able to control a robot to do every easy or medium-difficulty task, the same way an LLM can solve every easy or medium homework problem. This is what lets you automate (3), because you don’t need to do a lot of engineering work to get a robot to do a million different jobs.
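    The video-to-actions pipeline described above can be sketched in miniature. Everything here is a hypothetical stand-in (a bigram table in place of an LLM, a lookup table in place of a learned foundation model); it illustrates the shape of the pipeline, not a real implementation:

```python
from collections import Counter

# 1. "Millions of hours of video" -> tokenized action sequences.
#    Here each demonstration is already a list of discrete action tokens.
demos = [
    ["reach", "grasp", "lift", "place"],
    ["reach", "grasp", "lift", "place"],
    ["reach", "grasp", "rotate", "place"],
]

# 2. LLM-like model: predict "what would a human do next" from context.
#    Stand-in: a bigram table over action tokens.
bigrams: dict[str, Counter] = {}
for demo in demos:
    for prev, nxt in zip(demo, demo[1:]):
        bigrams.setdefault(prev, Counter())[nxt] += 1

def predict_next(action: str) -> str:
    """Most common human continuation of an action."""
    return bigrams[action].most_common(1)[0][0]

# 3. Foundation-model step: map the predicted human action onto robot
#    hardware built differently than humans. Stand-in: a fixed table.
human_to_robot = {
    "reach": "move_arm", "grasp": "close_gripper", "lift": "raise_arm",
    "rotate": "turn_wrist", "place": "open_gripper",
}

print(predict_next("grasp"))                  # -> "lift"
print(human_to_robot[predict_next("grasp")])  # -> "raise_arm"
```

    The reinforcement-learning step would then refine the action mapping from millions of hours of real or simulated practice, rather than using a fixed table.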

    Multiple startups and DeepMind are working on this.


  • Consider a flying saucer cult. Clearly a cult, great leader, mothership coming to pick everyone up, things will be great.

    …What if telescopes then show a large object decelerating into the solar system, the flare from its matter-annihilation engine clearly visible? You can go pay $20 a month, rent a telescope, and see the flare yourself.

    The cult, uh, points out their “Sequences” of writings by the Great Leader, and some of it is lining up with the imminent arrival of this interstellar vehicle.

    My point is that LessWrong knew about GPT-3 years before the mainstream found it; many OpenAI employees post there, etc. If the imminent arrival of AI were fake - like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs - that would be one thing. But pay your $20 a month, and man, this tool seems to be smart; what could it do if it could learn from its mistakes and had the vision module deployed…

    Oh, and I guess the other plot twist in this analogy: the Great Leader is saying the incoming alien vehicle will kill everyone, tearing up his own Sequences of rants, and that’s actually not a totally unreasonable outcome if you could see an alien spacecraft approaching Earth.

    And he’s saying to do stupid stuff like nuke each other so the aliens will go away, among other unhinged rants, and his followers are eating it up.