Talk:Singularity/Archive1


This is an archive page, last updated 22 November 2022. Please do not make edits to this page.

Dragon's Egg

Liked Dragon's egg - nice hard SF. Susanspeak your mind 03:06, 24 October 2007 (EDT)

Have you read Quarantine? one of the weirder and more wonderful takes on the singularity idea. --Jeeves 03:17, 24 October 2007 (EDT)
No, never have - on your recommendation though I will. Never read any Greg Egan. Off to the big city book shop, hope I can find it & you're right ;-) Susanspeak your mind
No Quarantine - got Schild's Ladder £7.99 ($15?), it better be good ;-) (see also User_Talk:AmesG) Susanspeak your mind 08:35, 24 October 2007 (EDT)

Three Major Singularity Schools

There are at least three very different definitions of "technological singularity" in common use. This article currently treats them all as the same concept. See http://yudkowsky.net/singularity/schools for definitions of the three different schools of thought --PeerInfinity (talk) 22:33, 23 November 2010 (UTC)

I like his "fourth" definition. That's pretty much what it's like. Scarlet A.pngsshole 09:10, 24 November 2010 (UTC)
I've paraphrased Yudkowsky's article into a section here. I'm not sure whether it then contradicts the previous section, which talks about the idea in more general terms. It might annoy EY to combine them, but those ideas aren't distinct enough to treat entirely separately. Scarlet A.pngsshole 09:28, 24 November 2010 (UTC)

Blue Brain Project

Personally, I feel that the technological singularity will be reached by the end of this decade. I'm not going to use the argument of Moore's Law. It's true that anyone who believes in perpetually accelerating growth in a world with finite resources is either a madman or an economist. It's also one of my main grudges against economic growth as we know it, but that's a topic for another day. The wp:Blue Brain Project is an attempt to reverse-engineer the neocortex (or possibly the whole brain) and build an electronic version of it. At a TED conference in 2009, it was stated that the human brain could be built in 10 years. What took nature ~4 billion years will take humans less than twenty. Once we can create robots as smart as humans, with the same neural construction - the main difference being that one uses sodium ions and the other uses electrons in its wiring - it is hardly surprising that these robots will seek to "improve" themselves, which will increase their intellect even further. The world in 2022 is going to be very different from 2012. The Heidelberg Kid (talk) 01:21, 29 January 2012 (UTC)

Yeah, I wouldn't hold my breath on that one. Or maybe I would just to keep the fumes from the Kurzweil Kool-Aid from seeping into my own brain. Nebuchadnezzar (talk) 01:40, 29 January 2012 (UTC)
But nanobots! Scarlet A.pngpathetic 01:59, 29 January 2012 (UTC)
Lol. ħumanUser talk:Human 02:03, 29 January 2012 (UTC)
And if that fails, "But Moore's Law!" Nebuchadnezzar (talk) 02:22, 29 January 2012 (UTC)
The AI guys down the hall once mentioned a cat brain being done, using an ungodly amount of RAM, on the order of 100TB or so. The human brain's a long way off. TyAnnoy 02:34, 29 January 2012 (UTC)
Even that didn't simulate individual neurons and synapses. Nebuchadnezzar (talk) 02:37, 29 January 2012 (UTC)
Now, why not? They've already made great progress since 2005, when the first individual artificial neuron was made. They'll be able to make the artificial rat brain by 2014, and the artificial human brain by 2023. Now, why wouldn't this be possible? Man has been able to replicate nature quite well before, as you probably know from bioengineering. The Heidelberg Kid (talk) 02:41, 29 January 2012 (UTC)
Bioengineering != CS. TyAnnoy 02:43, 29 January 2012 (UTC)
We are only just beginning to figure out how the human brain functions, let alone simulate the damn thing. AI is notorious for under-delivering on its promises. Nebuchadnezzar (talk) 02:47, 29 January 2012 (UTC)
AI techies are notorious for overestimating what they actually know. ;-) Pink mowse.pngGodotGrow a vagina 03:40, 29 January 2012 (UTC)
Of course. But that's just the usual future projection fallacy or whatever it's actually called. The only way to say, with any degree of reasonable confidence, that we'll "have technology X by 2020" is to have the ability to do it now. You simply can't predict the future that well unless you get a million people to make different predictions and laud the one that happens to get it right as a visionary. Scarlet A.pngmoral 10:43, 29 January 2012 (UTC)
Sorry, wrong word. I meant "biotechnology". The Heidelberg Kid (talk) 02:51, 29 January 2012 (UTC)
The non-scientist in me is dubious that we will *ever* hit this point, but if we do, it won't be soon. Technology takes too much time to develop. We don't even know how we think - so any AI will either be limited or just different. But again, I just read about this today, and am trolling the site for things to talk about -- so mostly "what do I know". Pink mowse.pngGodotGrow a vagina 03:36, 29 January 2012 (UTC)
Hkid - I'm curious. How can they make these "artificial brains" when we don't even know how the fucking brain works? We are just now getting glimpses of what makes sensations; thoughts are beyond our comprehension right now. We would have to actually understand what the folds are for, what the ratio of chemical to goo is, and why they are that way. We might be able to GROW an already "designed" brain, but we are a long way off from "making" a brain from scratch. Pink mowse.pngGodotGrow a vagina 03:39, 29 January 2012 (UTC)
I think we're more likely to achieve biological immortality before the singularity, and while I certainly think the former will be reached in my lifetime, I can only hope for the latter. Godot's right. Since consciousness is an emergent property, simply being able to imitate the components of the brain will do nothing for our ability to actually create a functioning mind. Blue (pester) 04:11, 29 January 2012 (UTC)
I just want to be a tech priest really. Without the stupid Omnissiah machine spirit BS. So that would make me more of a heretek really. TyAnnoy 04:15, 29 January 2012 (UTC)
A cyborg? Blue (is useful) 04:18, 29 January 2012 (UTC)
Yep. More of the Dread Magos Robertson than anything. TyAnnoy 04:22, 29 January 2012 (UTC)
Has Blue been sniffing the glue in the bindings of those transhumanist books all this time? Nebuchadnezzar (talk) 04:19, 29 January 2012 (UTC)
I've never heard of transhumanism... though maybe Iain M. Banks' books would count. Blue (pester) 04:26, 29 January 2012 (UTC)

We haz snarticle. Nebuchadnezzar (talk) 04:27, 29 January 2012 (UTC)

Hmmm, I suppose I'd subscribe to some kind of transhumanism, but more along the lines of funky induced genetics and mechanical implants type things. Blue (pester) 04:30, 29 January 2012 (UTC)
Similar here. Singularity is pretty much BS though. TyAnnoy 04:31, 29 January 2012 (UTC)
I'll go out on a (titanium) limb here and say that that'll happen long before AI is ever perfected. Peter Monomorium antarcticum 04:34, 29 January 2012 (UTC)
Meh. Read the link to One-Half a Manifesto on this page. It's far more sane and level-headed than much of what comes out of the transhumanist/Singularitarian strain of thought. Nebuchadnezzar (talk) 04:48, 29 January 2012 (UTC)
I stopped reading when he argued "Someone will eventually prove X. Therefore X." (I'm not making that up. Search for "new Kant" for his actual wording.) Player 03 (talk) 04:01, 10 April 2015 (UTC)

Regarding the whole "we don't know how the brain works, how the hell are we supposed to re-make it" thing, I'd like to say that a lot of things (e.g., many medicines) are made by humans and work even though we don't understand exactly how they work.
Regarding the fact that consciousness is emergent, I can easily write a simple program in pseudocode to test this:

Memory

  • Definition:conscious = "aware of what is going on around you"

Input

  • "Are you conscious?" }= I

Processing and Output

  • IF I = "Are you conscious", proceed to Algorithm X

Algorithm X

  • IF Self is Definition:conscious, reply YES }= A
  • IF ~A, reply NO

Basically, we include the definition of being conscious in the AI's memory, ask the AI if it is conscious, and program it so it will reply "yes" iff (if and only if) it is conscious, and reply "no" otherwise. Kind of like what would happen if you were to ask a human "are you conscious?" If we get a "yes", the person's conscious, unless we want to go into the wild world of solipsism. The Heidelberg Kid (talk) 17:22, 30 January 2012 (UTC)
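
(Editorial aside: a minimal sketch, in Python, of what the pseudocode above literally amounts to; the names MEMORY, algorithm_x and respond are illustrative inventions, not anyone's actual implementation. It runs, but note what it shows: the "definition" is just a stored string, and the answer is "YES" only because a flag was hard-coded by the programmer, which is essentially the gap EVDebs points out in the reply below.)

  # Illustrative transcription of the pseudocode above, not a test for consciousness.
  MEMORY = {"conscious": "aware of what is going on around you"}  # the stored "definition"

  def algorithm_x(self_matches_definition):
      # Algorithm X: reply YES iff the flag is set, NO otherwise.
      return "YES" if self_matches_definition else "NO"

  def respond(prompt):
      # Processing and Output: route the question to Algorithm X.
      if prompt.rstrip("?") == "Are you conscious":
          # The program has no way to compute this flag from the inside;
          # the programmer simply asserts it, and that assertion does all the work.
          self_matches_definition = True  # assumption baked in by the author
          return algorithm_x(self_matches_definition)
      return "I don't understand."

  print(respond("Are you conscious?"))  # prints "YES" regardless of any actual awareness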

That's all well and good but for two things. The first is the obvious one -- how do you write a test for self-awareness to begin with? Second, your argument is essentially assuming, if not a miracle, then at least a major stroke of dumb luck. It could happen, but it's not guaranteed to, so it's not likely to be worth considering. (Don't forget, drug discovery is a rather scattershot business, and as a general rule, has a lot more people working on it at any given time than AI work does.) Under those circumstances, any attempt to make a prediction about strong AI (to say nothing of the Singularity) is at best meaningless, at worst delusional. EVDebs (talk) 19:31, 30 January 2012 (UTC)

Arguments?

The article is basically pure snarl. What I'm missing is a reason why. For example:

This posits that the singularity is driven by a feedback cycle between intelligence-enhancing technology and intelligence itself. As Yudkowsky (who endorses this view) puts it: "What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they'd design the next generation of brain-computer interfaces." When this feedback loop of technology and intelligence begins to increase rapidly, the singularity is upon us.

This seems pretty solid. Assuming people use technology to enhance their intelligence, that enhanced intelligence will be better at improving its intelligence. So there's one plausible scenario. Strong AI is another, resulting in a similar feedback cycle, if such an AI is able to program another AI.

"Critics, however, believe that humans will never be able to invent a machine that will match human intelligence, let alone exceed it." <- This is the only argument given against a singularity scenario in general (as opposed to short time frames), which however sounds like a "Souls are magic"-argument to me. IF we are at the bottom physical systems and consciousness/intelligence is a result of physical processes (as I think most, if not all, of us are pretty sure of), how could anyone argue that we will never be able to match our own intelligence with technology, given that we KNOW that physical systems are capable of intelligent behaviour?

Of course people like Kurzweil are hard to take seriously with their narrow time estimates; but WHEN the singularity comes is a different question from IF it comes, or whether it is plausible that it comes.

Point is: given that this is RationalWiki, it's pretty absurd to see that this article consists mostly of bashing the idea without any attempt at arguing against it. The idea is at the very least not OBVIOUSLY wrong, so I'd suggest either giving (at least ANY, at best SOLID) arguments for why the idea is bullshit or taking a more neutral stance. — Unsigned, by: 178.7.248.190 / talk / contribs

I know I'm just another anon editor, but really I came to this talk page to try and express thoughts pretty similar to the above. There's almost nothing but ridicule and argument from absurdity. I come to RationalWiki looking for arguments, or at least links to sources which carefully address the issues. Congratulatory ad hominem bashing of ideas without using arguments of substance does not help me find information which may change my mind. I encourage RW editors who think the main idea of the singularity is poor to rewrite this article and focus on real issues with the arguments presented for it, not laughing at those who don't agree with the consensus. You'll convince more people that way if you're right, and maybe you'll find things are less clear-cut in mid- to long-term future predictions.--171.96.240.105 (talk) 18:51, 8 February 2014 (UTC)
The basic issue (as the article says) is that there's no actual reason to think that such a feedback loop will continue forever. Real-world design and development doesn't proceed along linear curves associated with the number of researchers or their intelligence (even if intelligence were quantifiable, which it is not). Therefore, it's possible to hit a point where the best AI or brain-computer interface is incapable of further rapid, dramatic improvements to its capabilities -- further directions for research might exist in that situation, but could, for instance, require years, decades, or centuries of practical research and refinement of industrial techniques that are outside the capability of the AI to simulate. It's entirely possible for computer science to hit a dead end or a plateau -- for our best computers to hit a point where we can't really improve them using our current understanding of physics, and where the best possible AIs those computers can run (after those AIs have improved themselves to their absolute limits) are unable to get our understanding of physics to the necessary level except in a very long timeframe, possibly because, say, further physics research in the proper directions requires materials or natural phenomena we cannot easily obtain. If the research necessary to produce the next-gen computers that can run next-gen AIs requires studying, up close, a type of quasar of which the closest example is a thousand light-years away, whelp, your successively-tighter singularity-loops just hit at least a thousand-year delay (I guess two thousand, if you want to get the information back here!)
Real-world technological advancement isn't like playing Civilization -- often it takes broad changes to the infrastructure of your entire society to even begin to investigate possible improvements on current technology; it's not necessarily enough to have a billion supergeniuses in a box, no matter how smart. Which isn't to say that actual human-like or transhuman AIs won't cause dramatic changes to society and be able to improve research in all sorts of ways, of course, but the idea that the degree of iterative self-improvement necessary for the singularity is possible is purely speculative. It's possible that our first AIs will improve themselves a bit, then halt and require entirely new physical forms of computing in order to go any further, which they can't get without practical research rather than just simulations and thinking.
More than that, though, I think the real focus of this article is about the meaning people attach to the Singularity -- the comparison to the Rapture, I think, is particularly apt. If you challenge someone who considers themselves a rationalist about their predictions for the future of contact with alien life, or the future of economics, or the future of cars or transportation or the internet, they'll generally take it in stride; but in my experience, there are a lot of people committed to the idea of the Singularity to the extent where challenging it is taken as a personal insult. And I think that's because, as the article implies, to many of them it is their way of envisioning a future where they can avoid death, or a way of picturing a dramatic, world-shaking, and most of all imminent future that will hit within their lifetime and render all their mistakes and flaws and problems up until now moot. None of which is to say that the Singularity is impossible (the article, while snarky, doesn't go that far) -- it doesn't violate any physical laws, and the only really dramatic leap of faith it requires is perhaps a somewhat questionable handwave about the nature of research and scientific progress -- but at the same time, at the moment it's important to recognize that everything about the Singularity amounts to purely speculative musings without any real backing to them beyond "it'd totally be cool if we could develop an AI that could exponentially improve itself forever, right?" --Aquillion (talk) 05:29, 17 August 2014 (UTC)
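
(Editorial aside: a toy Python illustration of the disagreement above; the numbers and both gain functions are arbitrary assumptions, not anything from the comments. With gains proportional to current intelligence the loop runs away; with gains that saturate, it doesn't, and nothing in the Singularity argument itself tells us which regime real-world research is in.)

  # Toy model only: recursive self-improvement under two assumed "returns" regimes.
  def run(gain, start=1.0, generations=30):
      level = start
      for _ in range(generations):
          level += gain(level)  # each generation improves itself by gain(level)
      return level

  proportional = run(lambda i: 0.5 * i)          # gain proportional to intelligence
  saturating = run(lambda i: 0.5 * i / (1 + i))  # gain levels off as i grows

  print(f"proportional returns after 30 generations: {proportional:,.0f}")  # explodes
  print(f"saturating returns after 30 generations:   {saturating:.1f}")     # creeps along
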
"Real-world design and development doesn't proceed along linear curves associated with the number of researchers or their intelligence." Indeed, it's often other factors that determine the amount of time taken. But then again, it's not like smart people are all that distant from stupid people, so perhaps you wouldn't expect to see a huge difference. If you really wanted to see a difference, use non-humans as your designers. The other great apes are pretty close to us, evolutionarily speaking, but none of them would be a productive researcher.
I know we don't have a way to measure actual intelligence, but no matter how you look at it, humans are reliably smarter than any other animal.
I guess what I'm saying is, the whole "endless exponential self-improvement" thing isn't even necessary. If a computer got as far ahead of us as we are ahead of chimpanzees, that would be enough to take over the world. Player 03 (talk) 02:35, 8 April 2015 (UTC)

Arguing backwards

The concept is not necessarily wrong, but unlikely to be more than partially right.

Go back 50 years, and much of what is now around could be understood ('technology advances but emails ... something like a personal telegram service in your home ... hot Venus - weird, but if you say so... websites... something like amateur radio, but involving text and pictures...' etc)

Go back 150 years and some of present technology would be understood ('a car is like a steam engine, but uses petrol and an exhaust pipe rather than coal and a funnel...') but 'transmission at a distance' would be difficult to explain.

Go back 500 years and much would be inexplicable.

'Most people' would accept that the same will apply for future equivalents (but what is invented is unlikely to be more hostile than 'contrary computers' and having a slow puncture on Mars). 82.44.143.26 (talk) 14:35, 2 May 2014 (UTC)

Black Swan

One of the main problems I have with the Singularity is that such an event is, by nature, extremely rare and unpredictable. I'll be honest here, I'm not willing to rule it out completely, but I think the pseudoreligious overtones some proponents give it are kind of stupid. Optimusjamie (talk) 10:37, 1 December 2014 (UTC)

"More intelligent"[edit]

From the article: "it isn't clear whether you can make something which is directly 'more intelligent' than a human." But hold on, RationalWiki has a whole list of things humans reliably do wrong. Assuming you can make a human-level AI, it seems clear to me that you can make something that's "directly more intelligent." Just fix the biases! Player 03 (talk) 10:04, 6 July 2015 (UTC)

How sure can we be that fixing those biases will not introduce others, at least as serious? Despite those cognitive biases (frequently) leading to wrong results, they might have a strategic advantage with respect to survival in the real world. So how could this hypothetical thing be considered 'more intelligent' than a human if it is less well-equipped for survival (modulo the fact that what 'survival' means differs between the two)? Note that for quite a few complicated tasks logically-designed algorithms are outperformed by algorithms created by brute-force empirical testing. Additionally, we currently do not know if the creation of a 'human-level AI' is even possible without introducing a plethora of biases. Queexchthonic murmurings 10:15, 6 July 2015 (UTC)
I omitted part of the sentence for brevity, but here it is in context: "Indeed, how much smarter it is possible for something to even be than a human being is an open question; it isn't clear whether you can make something which is directly 'more intelligent' than a human." We aren't discussing feasibility here; we're discussing physical limits.
I don't think evolution could have produced anything near the highest possible intelligence. (No matter which definition you prefer.) Player 03 (talk) 06:41, 8 July 2015 (UTC)
But the point still stands - it does not follow that intelligences created without those biases would be more intelligent (for any non-degenerate definition thereof) than humans. You can make intelligences and see how they stack up, but it's not a given.
Evolution, given enough time, would inevitably produce a highest possible intelligence. Even if it does it indirectly by first creating a lesser intelligence that 'designs' a better one. If something can't be produced by evolution, even indirectly, then it's not possible for it to be produced at all in this universe. Ergo, evolution will produce the highest possible intelligence at some point. Probably just before the universe becomes incapable of sustaining complex life, but there you go. Queexchthonic murmurings 10:46, 8 July 2015 (UTC)
If something can't be produced by evolution, even indirectly, then it's not possible for it to be produced at all in this universe. Are you quite sure on this one? - David Gerard (talk) 12:41, 8 July 2015 (UTC)
Sorry, they're right. Stars don't exist. ikanreed You probably didn't deserve that 13:25, 8 July 2015 (UTC)
I'm kinda doubtful you can count evolved lifeforms artificially creating new intelligent entities as evolution. Also, evolution doesn't try to get the 'highest' of anything. It's actually not about survival of the fittest, but about survival of the sufficiently fit. In cases of competition, that usually merely means survival of the slightly fitter. And there's no assurance that the slightly fitter will always include intelligence as a trait and work up from there. Even if that were the case, there are huge extinction events that would regularly reset any progress made towards an 'ultimate' intelligence. 141.134.75.236 (talk) 14:36, 8 July 2015 (UTC)
I don't think the point stands. Some biases make us less intelligent, so fixing those biases would make us more so. For instance, congruence bias makes us worse at testing hypotheses, selective perception causes us to miss important details, and practically every single probability-related bias makes us worse at making decisions. Player 03 (talk) 01:06, 9 July 2015 (UTC)
Sorry, I was using 'evolution' as a stand-in for 'natural processes', that was sloppy. But the point still stands - anything that natural processes can't produce, is not possible. The distinction between natural and artificial is not very useful in general. Everything artificial owes its existence to something natural. Don't like the idea that a computer is 'natural'? What about a bird's nest? A worm cast? Dung? All of those, including the computer, are produced by a 'natural' creature. What about tidelines? A natural mechanical process, but the presence of living creatures makes a difference to how it works. I don't think the natural/artificial distinction stands up to scrutiny.
It also doesn't matter whether evolution, or any other mechanistic process, has a 'goal' or not. For any metric bounded above with a bounded domain, there is a (not necessarily unique) maximum value. So in the space of all intelligences that natural processes have produced and will produce, and for any non-degenerate intelligence metric, there will exist a naturally-produced 'highest' intelligence. Evolution, biological or otherwise, is likely to be the driving force behind it, in any case.
On the subject of eliminating biases - let's say you develop a suite of tests designed to measure intelligence, as best you can. Would a hypothetical intelligence without congruence bias perform better than one with? It's by no means certain. For 1 - it's possible that such a bias can be advantageous in results-driven problem-solving. As a possible mechanism for this, it might hasten reaching a 'good enough' result by reducing the search space. It prevents you from reaching the optimum, but you're faster at finding a result that 'will do'. For 2 - An intelligence that works without that bias might only be possible if it has a different (as-yet unnamed) bias elsewhere in its approach. In which case, it might well be a toss-up which version would do better even if you assume that less bias = better.
If you cheat by making a biased set of tests that only considers those biases you're trying to eliminate, all you've done is prove that water is wet. Queexchthonic murmurings 12:08, 9 July 2015 (UTC)
Off-topic: you say "anything that natural processes can't produce, is not possible," yet you define "natural processes" to mean "anything that exists."
On-topic: I'm not sure why you think testing is the right way to measure intelligence, when you previously pointed out that intelligence is about real-world performance. Whatever, I'll go with it.
Yes, I would expect someone without congruence bias to score better, on average. Any decent intelligence test should - among other things - test if you can figure out a confusing phenomenon. Congruence bias makes you worse at the scientific method, therefore for any reasonable definition of intelligence, it makes you slightly less intelligent.
Also, "it's possible that such a bias can be advantageous"? Really?Wikipedia Congruence bias doesn't save you time, it makes you waste time doing similar tests over and over. You can test more things in the same amount of time if you design your tests efficiently, but you can't do that if congruence bias prevents you from thinking of certain possible tests. Player 03 (talk) 03:28, 10 July 2015 (UTC)
Of course it's a tautology - you were the one who specified 'physical limits'. It doesn't matter how majestic a particular theoretical intelligence is, if we can't reach it from here then it's off the table. It's really more an application of the anthropic principle.
"Yes, I would expect someone without congruence bias to score better, on average." That's a very large assumption made on little evidence. Why is 'better at the scientific method' a better gauge of intelligence than 'better at not being eaten by predators'? If the metric you use only considers pure logic, then you are baking your conclusion into your test in exactly the 'water is wet' way I warned against.
"Also, "it's possible that such a bias can be advantageous"? Really?Wikipedia" Yes, really. What if an attitude skewed towards direct tests helps develop intelligence more quickly, develops a better intuitive grasp while the intelligence is still in the early stages of learning? Take two intelligences, one designed to allow congruence bias and one designed to never allow it. How sure are you that the first won't learn more quickly, when in the earliest stages it's constrained to more intuitive experiments? Will the second ever be able to 'catch up' if it turns out to be delayed at that early stage? If this example doesn't convince you, consider negativity bias: the potential practical benefits of that bias are very obvious. We simply don't know enough about how we developed these biases, and what makes them persist, to make the large assumption that removing them is an automatic plus. It seems likely that it would be a plus, but it can't be assumed to be certain. Biases are a disadvantage in science and logic, but any measure of intelligence that considers only those facets and not the ability to survive and live as well is a joke metric. Queexchthonic murmurings 11:13, 10 July 2015 (UTC)
"Why is 'better at the scientific method' a better gauge of intelligence than 'better at not being eaten by predators'?" Society protects us from predators, so there's no benefit to being able to avoid them. On the other hand, understanding science gives people an advantage.
For instance: programming can be a lucrative job. An important part of programming is debugging. Debugging is, in many cases, very similar to the scientific method. Therefore, being good at science can make you more successful in modern society.
"It doesn't matter how majestic a particular theoretical intelligence is, if we can't reach it from here then it's off the table." True, but as far as I can tell, this part of the article is talking about physical limits, not human limits. Again, here's the quote: "how much smarter it is possible for something to even be than a human being (emphasis added)." If the article was worded more like your statement, I wouldn't have started this thread in the first place. Player 03 (talk) 23:42, 12 July 2015 (UTC)
The point you're missing is that you're still assuming a whole load of things about how to measure 'intelligence' - in particular rooting it in present-day matters, which it is foolish to assume reflects a timeless standard. That programmer - he's a good programmer and successful now. How would he fare 2000 years ago? His skillset is useless and his much vaunted logical mind might not be as advantageous as, say, a talent for cold-reading. How would he fare 2000 years into the future? His computer language skills would be long obsolete, and even his basic knowledge of computer science could be useless as programs are written by programs by that point. Even a personal knowledge of the scientific method might not be as useful as being able to interpret and apply scientific conclusions. If you try to construct any sort of measure for intelligence that's not unduly biased towards a particular milieu, you end up with a generalised 'fast to learn, fast to adapt, good at spotting patterns' sort of thing. The removal of specific cognitive biases is not the silver bullet for that sort of metric that you seem to think it is. Queexchthonic murmurings

Evolution argument

The thread above is getting bogged down in the details, so here's another, hopefully clearer, way to approach this.

1) Human intelligence evolved.
2) Evolution rarely, if ever, produces the best possible version of anything.
3) Therefore, human intelligence is almost certainly not the theoretical limit of intelligence.

Player 03 (talk) 00:47, 13 July 2015 (UTC)

":2) Evolution rarely, if ever, produces the best possible version of anything."

1) Everything in the universe is the result of natural processes (including evolution), or processes created by natural processes, etc.
2) Everything that ever will be in the universe will also be a product of those processes.
3) Therefore the maximal intelligence the universe will ever produce will be a product of those processes.

I would argue that, given sufficient time:
  • The natural processes that gave rise to evolution, or evolution itself, or processes that evolution sets in motion, are guaranteed to produce the best of something.
  • 'The life of the universe is not sufficient time for those processes to produce the best possible version of something' is a nonsense phrase. If there is not enough time for the universe to produce it, then it was not possible to produce.

1) There are numerous ways of measuring intelligence, and no perfect single measure of intelligence.
2) There are different sub-types of intelligence that cannot be measured by the same test.
3) Any composite test therefore contains embedded assumptions about the relative 'worth' of those sub-types.
4) No purely objective weighting of those sub-types exists.
5) When comparing two intelligences, unless one is strictly better than the other in all such metrics, it is not possible to designate one as more intelligent.
6) Therefore, we cannot assume that a single maximal intelligence exists (or ever will exist); there might be a set of intelligences that we cannot separate out without resorting to subjective weighting. Given the sheer number of ways you could try to measure intelligence, a sensible prior assumption is to assume that the set will be very large.
7) The concept of 'higher intelligence' loses any meaning once you enter that set.
8) Humanity might already be in that set for all we know. Even if humanity isn't, there's no good reason to exclude any evolved intelligence with what we know now.

To expand on point 6 - imagine there were only two tests and 3 intelligences. The scores were (10,4), (7,7), (4,10). Without choosing some way to weight the two tests, comparison is impossible, let alone selecting a winner. A fourth intelligence scoring (4,11) would knock the third out of the running, but even then, we don't have any way of saying that (4,11) is significantly more intelligent than (4,10) or if they are, in practice, so close that the distinction is meaningless.

You can swap out 'intelligent' for any other attribute and it holds. The point is that any non-physical measure is, primarily, a reflection of the one doing the measuring, not the thing being measured. Queexchthonic murmurings 12:19, 15 July 2015 (UTC)
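
(Editorial aside: a small Python check of the worked example above, using the standard Pareto-dominance test; this is an illustration added here, not part of the original comment. One intelligence "dominates" another only if it scores at least as well on every test and strictly better on at least one; anything short of that cannot be ranked without picking a subjective weighting.)

  # Illustrative only: Pareto dominance among the example score tuples above.
  def dominates(a, b):
      return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

  scores = [(10, 4), (7, 7), (4, 10), (4, 11)]
  for a in scores:
      for b in scores:
          if a != b and dominates(a, b):
              print(f"{a} dominates {b}")
  # Prints only "(4, 11) dominates (4, 10)"; every other pair is incomparable without weights.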

The processes of evolution are by no means guaranteed to produce a globally maximized solution to a problem. Particularly when local maxima exist. ikanreed You probably didn't deserve that 12:59, 15 July 2015 (UTC)
Very much agreed. Nerves are inherently limited; the real speed limit is about three million times the speed of nerve impulses. I mean, color me impressed at what evolution accomplished with neurons, but imagine how much more it could have accomplished with electric circuitry! Problem is, evolution isn't smart enough to make the jump to new materials; in other words, it's caught in a "local maximum." (Ok, it hasn't reached the maximum yet, but it's getting there.) Player 03 (talk) 23:09, 15 July 2015 (UTC)
"The processes of evolution are by no means guaranteed to produce a globally maximized solution to a problem." - Look up MCMC simulation. The concerns are that the process can get stuck in a local maximum for the duration of the simulation. It's possible for the simulation to jump out of it (unless it's a very simple, naive updating algorithm), but might not be probable. When you have the entire time-scale of the physical universe to work with, then that supposedly local maximum is, in fact, the actual global maximum achieved. In another universe, it might have found a different maximum, but that's a theoretical consideration, not a practical one. Queexchthonic murmurings 10:36, 16 July 2015 (UTC)
"If there is not enough time for the universe to produce it, then it was not possible to produce." That's the biggest non sequitur I've read all week. This company can create any 8kb sequence of DNA, "even complex, GC-rich, repeated, or long genes, with 100% sequence accuracy guaranteed." Even rounding down, that's 10^2408 possibilities! There's nowhere near enough time until the expected heat death of the universe to produce all of those, yet they are all possible.
Besides, my conclusion wasn't "evolution cannot produce an optimal intelligence in the lifespan of the universe" (even though I suspect that's true), my conclusion was "evolution has not yet produced an optimal intelligence."
Also, I'm talking about the capabilities of evolution in particular, not the capabilities of evolution plus other processes. Player 03 (talk) 22:59, 15 July 2015 (UTC)
I maintain that we have no evidence that evolution hasn't, as (somewhere in the universe) we might have already reached that terminal set of intelligences where it's no longer justifiable to rank them. Actually, I started off pointing out that the assumption that removing biases automatically results in a 'better' intelligence was not justified, so we've got way off-topic here. It's still true - we don't know how those biases are interconnected with other aspects of intelligence and you can make a case for situational benefit for any of them. Queexchthonic murmurings 10:36, 16 July 2015 (UTC)
First it was "natural processes" rather than evolution. Next it was the distant future. Then I clarified that I was talking about the past, and you jumped to "somewhere in the universe." Because you seem to grasp at anything you can to avoid addressing modern human intelligence, here are some more definitions.
- When I say "human," I am referring to homo sapiens sapiens. I am not talking about hypothetical lizard people who might be ruling the world, nor am I talking about Cthulhu.
- When I say "Planet Earth," I am not talking about planets containing soil, I am specifically talking about the planet named Earth that is located here.
- When I say "modern-day," I am referring to 2015 A.D.Wikipedia (plus or minus several decades if necessary).
- When I say "evolution," I'm talking about the scientific Theory of Evolution by natural selection, as it is understood in 2015 A.D., because evolution - as described by this Theory - is widely held to be responsible for the structure and capabilities of the modern human brain. When I say "evolution," I am not talking about "natural" processes in general.
- When I say "intelligence," I am talking about the ability to solve problems in the context of an affluent society in 2015 A.D. on Planet Earth. I specify affluent societies in 2015 A.D. because that is the closest reference we have to the context in which general AIWikipedia would be developed. The wider variety of problems one can solve in this context, and the faster one can solve them, the greater one's intelligence (according to this definition). In this definition, a "problem" is any discrepancy between reality and the way the intelligent agent wishes reality was, and a problem is "solved" when reality matches the way said agent wishes reality was. (This definition also requires an agent to have desires, preferences, and/or goals.)
- When I say "possible," I am referring to any logical possibilityWikipedia, given the laws of reality as accredited human scientists understand them today. In other words, any state of affairs that does not contradict the currently widely-accepted laws of physics, chemistry, biology, etc. is considered "possible." See also: nomological possibilityWikipedia.
With those definitions in mind, I am claiming that it would be possible for an entity to have a greater intelligence than modern-day humans. As evidence, I cite the fact that modern-day humans are a product of evolution by natural selection, and natural selection tends not to produce perfect designs, especially not in under 13 million yearsWikipedia. Player 03 (talk) 20:29, 16 July 2015 (UTC)
So why this focus on modern-day Planet Earth? Why not the ancestral environment? Well, the reason is, this article is about the Singularity, a hypothetical future event in which a superhuman AI takes over the world. This AI would be built in a wealthy nation, using slightly-better-than-modern technology, and it would hypothetically take over a world that wasn't too different than the one we know today.
I will be happy to discuss whether or not humans would be capable of building this thing, but not now. I'll make another thread for that. The only thing I'm claiming in this thread is that the AI could exist, not that we could build it. Player 03 (talk) 22:26, 16 July 2015 (UTC)

Ronald Reagan

said 'we are not at war with the fishes.'

The singularity would bear a similar relationship to us as we do to the fishes.

Therefore it would not be at war with us.

Alternatively: just as humans can theoretically program the singularity into existence, they can theoretically include the two lines of code '10 work cooperatively with humans', '10(to large number) go to 9.' Problem solved. 82.44.143.26 (talk) 15:00, 22 August 2016 (UTC)

The plug

... in your bath is the actual singularity (gurgle, gurgle). 31.51.114.49 (talk) 21:46, 18 April 2017 (UTC)

Reasons to be cheerful

  1. There are several different operating systems (Windows, Apple Mac, Linux etc) so there will be problems in merging them all.
  2. Computers tend to have communications problems at the best of times - [1] etc.
  3. The next Carrington event.

Any more? 86.146.99.69 (talk) 21:48, 23 April 2017 (UTC)

Too religious for my taste

In my opinion, the "singularity" is too similar to a religion, putting faith in computers and technology (I prefer not to use "science", as it is too vague) as the saviors that will give us immortality. --Panzerfaust (talk) 10:49, 28 May 2017 (UTC)

"Energy requirements are another issue"[edit]

We're nowhere near the Landauer limit, so this isn't actually a very good counterargument, but I don't want to delete your Mountain Dew joke. Hmmph (talk) 02:51, 2 December 2017 (UTC)

Seconding the above. The claims about energy demands made here are completely predicated on the assumption that electrical transistors will always be the default, which is a strong assumption not based on evidence. A number of groups have made substantial progress on extremely low-energy alternative computing hardware types in recent years, some of them ironically based on the principles and/or architecture of the brain. The dorito joke is fantastic, though. Maybe rephrase as a comment about how we don't know if computing will ever get as efficient as human computation - noting the dorito fueling? — Unsigned
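
(Editorial aside: some back-of-the-envelope Python on the "nowhere near the Landauer limit" point; the per-operation figure for current hardware is a rough order-of-magnitude assumption for scale, not a benchmark.)

  # Rough illustration only: the Landauer bound (k_B * T * ln 2) at room temperature
  # versus an assumed ~picojoule cost per logical operation for present-day hardware.
  import math

  k_B = 1.380649e-23                # Boltzmann constant, J/K
  T = 300.0                         # room temperature, K
  landauer = k_B * T * math.log(2)  # minimum energy to erase one bit, ~2.9e-21 J

  assumed_cost_per_op = 1e-12       # assumption for scale only, not a measured figure

  print(f"Landauer bound:        {landauer:.2e} J per bit")
  print(f"Assumed hardware cost: {assumed_cost_per_op:.0e} J per op")
  print(f"Headroom factor:       {assumed_cost_per_op / landauer:.0e}")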

It already happened argument

I take issue with this part of the article, as it seems to imply that the singularity already happened. This ignores the fact that the very point of the singularity relies on a strong AI producing a stronger AI until ____. Tabula Rasa (talk) 04:39, 21 December 2018 (UTC)

The basic idea is that technological singularity is represented by the swastika symbol and has already happened many times in history. The main purpose of the singularity is to create convergence which is the opposite of overpopulation. There are many types of convergence, such as spiritual, dimensional, biological, organizational etc. The idea is that overpopulation and resource shortage is the main cause of all bad things and convergence is the solution to this problem. However, because of the high amounts of power in the galaxy, abuse is also possible if the civilization is not careful. The side effects have also happened many times throughout history as well. — Unsigned, by: 24.80.103.237 / talk


I think it's also worth noting that some will argue we've experienced other singularities in the past. Salient examples claimed by more moderate writers would be the advent of agriculture, the industrial revolution, the electronic transistor, and perhaps even as we speak the internet. The Kurzweil et al singularitarians like to make wild and unjustified claims since their income is predicated upon hype, which definitely deserves skepticism (and often derision, some would argue). However, I think it's worth noting the more moderate view that a singularity is some point where humans can't properly conceive of what the world will look like on the far side. Each time there are features which are preserved, but Yuval Harari's book "Sapiens" offers an interesting perspective on just how radically the world and the human experience has been altered with each of those advances. Immortality, god-like AI, etc. may be bunk claims in the context of presently available knowledge, but we don't need the kinds of things Kurzweil et al claim in order to experience a singularity - potentially one that's detrimental to our species' odds of survival. One can more plausibly claim we should be cautiously guarding against any future analog to just how radically nuclear weapons have increased our risk of extinction. My bias is to view any form of potential singularity as a threat which needs to be mitigated to whatever degree is possible in advance, though, and I want to be explicit about that. — Unsigned, by: 128.12.122.86 / talk / contribs