this post was submitted on 12 Jul 2023
26 points (86.1% liked)

Singularity | Artificial Intelligence (ai), Technology & Futurology

9 readers
1 user here now

About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. It is basically a futurology sublemmy centered on AI, but not limited to AI alone.

Rules:
  1. Posts that don't follow the rules, and whose posters don't bring them into compliance after being told which rules they break, will be deleted no matter how much engagement they got, and then reposted by me in a way that follows the rules. I will wait a maximum of 2 days for the poster to comply before doing this.
  2. No Low-quality/Wildly Speculative Posts.
  3. Keep posts on topic.
  4. Don't make posts with link/s to paywalled articles as their main focus.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are high quality and/or can lead to serious, on-topic discussion. If we end up with too many memes, we will create a meme-specific singularity sublemmy.
  7. Titles must include information on how old the source is in this format dd.mm.yyyy (ex. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts with link/s to tweets as their main focus. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to represent new advancements in generative technology that are no older than 1 month.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

!auai@programming.dev (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub are currently VERY reliant on info from r/singularity and other subreddits on reddit. I'm planning at some point to make a list of sites that write/aggregate the kind of news this sublemmy is about, so we can get news faster and not rely on reddit as much. If you know any good sites, please DM me.

founded 1 year ago
MODERATORS
 

TL;DR: (AI-generated 🤖)

The author, an early pioneer in the field of aligning artificial general intelligence (AGI), expresses concern about the potential dangers of creating a superintelligent AI. They highlight the lack of understanding and control over modern AI systems, emphasizing the need to shape the preferences and behavior of AGI to ensure it doesn't harm humanity. The author predicts that the development of AGI smarter than humans, with different goals and values, could lead to disastrous consequences. They stress the urgency and seriousness required in addressing this challenge, suggesting measures such as banning large AI training runs to mitigate the risks. Ultimately, the author concludes that humanity must confront this issue with great care and consideration to avoid catastrophic outcomes.

[–] tal@kbin.social 1 points 1 year ago* (last edited 1 year ago)

I'll also add that I'm not actually sure that Yudkowsky's suggestion in the video -- monitoring labs with massive GPU arrays -- would be sufficient if one starts talking about self-improving intelligence. I am quite skeptical that the kind of parallel compute capacity used today is truly necessary for the tasks we're doing -- rather, we do things inefficiently because we do not yet understand how to do them efficiently. True, your brain works in parallel, but it is also vastly slower: its neurons fire at maybe 100 or 200 Hz, whereas our computer systems run with GHz clocks. I would bet that, if we had figured out the software side, the CPU in a PC today could act as a human does.
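To put rough numbers on that comparison (both figures are ballpark assumptions -- ~200 Hz neuron firing, a ~3 GHz consumer CPU -- not measurements):

```python
# Rough serial-speed comparison between biological neurons and a modern CPU.
# Both rates are order-of-magnitude assumptions, not precise measurements.
neuron_rate_hz = 200           # upper end of typical neuron firing rates
cpu_clock_hz = 3_000_000_000   # ~3 GHz consumer CPU clock

speedup = cpu_clock_hz / neuron_rate_hz
print(f"A 3 GHz CPU is ~{speedup:,.0f}x faster serially than a 200 Hz neuron")
# The brain compensates for its slow clock with massive parallelism
# (on the order of 10^10 to 10^11 neurons firing concurrently).
```

That ~10⁷ serial-speed gap is why it's plausible that better software, rather than more parallel hardware, is the missing piece.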

Alan Turing predicted in 1950 that we'd have the hardware for human-level machine intelligence by about 2000.

As I have explained, the problem is mainly one of programming.
Advances in engineering will have to be made too, but it seems unlikely
that these will not be adequate for the requirements. Estimates of the
storage capacity of the brain vary from 10¹⁰ to 10¹⁵ binary digits. I incline
to the lower values and believe that only a very small fraction is used for
the higher types of thinking. Most of it is probably used for the retention of
visual impressions. I should be surprised if more than 10⁹ was required for
satisfactory playing of the imitation game, at any rate against a blind man.

That's roughly 1.25 GB to 125 TB of storage capacity, which he considered to be the limiting factor.
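The conversion from Turing's figures (given in binary digits, i.e. bits) to modern units is straightforward arithmetic:

```python
# Convert Turing's storage estimates from binary digits (bits) to bytes.
def bits_to_bytes(bits: float) -> float:
    return bits / 8

low  = bits_to_bytes(1e10)  # 10^10 bits, low end of his brain-capacity range
high = bits_to_bytes(1e15)  # 10^15 bits, high end of the range
game = bits_to_bytes(1e9)   # 10^9 bits, his estimate for the imitation game

print(f"low:  {low / 1e9:.2f} GB")    # 1.25 GB
print(f"high: {high / 1e12:.0f} TB")  # 125 TB
print(f"game: {game / 1e6:.0f} MB")   # 125 MB
```

So his imitation-game estimate of 10⁹ bits is only about 125 MB -- far less than what today's consumer hardware holds.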

He was about right in terms of where we'd be with hardware, though we still don't have the software side figured out yet.