
Re: Interpreting the Singularity (O.T.)



You guys are only scraping the surface of the current knowledge.  Some 
books that would fill in what you're talking about include "On Intelligence" 
by Jeff Hawkins and "The Singularity Is Near" by Kurzweil.  Ben Goertzel is 
also an AI guru--his blog is chock full of singularity stuff.  It's probably 
worth mentioning that not everybody thinks like Ray.

However, if an AI is constructed that is smarter than a human and can 
self-improve, all bets are off.  Nobody knows what the result would be.  I 
don't feel very flaky in mentioning this.  Maybe 8 years ago I would have 
felt weird believing that strong AI is coming, but doing some (a lot) of 
deep reading about guys who are getting someplace (Hawkins, for instance) 
has changed my mind.  His insight that the important brain process is 
prediction, coupled with "invariant" object representations, has convinced 
me.

It ain't neural nets and genetic algorithms anymore.  Those were way off 
the mark.  They work for what they do, but the brain doesn't work that way.

t


----- Original Message ----- 
From: "Per Boysen" <perboysen@gmail.com>
To: <Loopers-Delight@loopers-delight.com>
Sent: Wednesday, March 09, 2011 2:20 AM
Subject: Re: Interpreting the Singularity (O.T.)


Interesting thoughts, Andy! I remember in the sixties when people were
so scared about the growth of the global (human) population, as this
growth had been measured to follow an exponential curve. Some futurists
thought it would lead to "a demographic explosion" and a cruel
Armageddon scenario. So what happened? Well, the global population
growth curve simply stopped being exponential and found a new balance.
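The flattening described here is the classic logistic pattern: a curve that
is indistinguishable from exponential growth early on but levels off at a
carrying capacity. A minimal sketch with made-up illustrative numbers (not
real demographic data):

```python
# Logistic vs. exponential growth: both look the same early on,
# but the logistic curve saturates at a carrying capacity K.
# All numbers below are hypothetical, chosen only for illustration.
import math

def exponential(p0, r, t):
    """Unbounded exponential growth: p(t) = p0 * e^(r*t)."""
    return p0 * math.exp(r * t)

def logistic(p0, r, k, t):
    """Logistic growth: same initial rate r, but saturates at k."""
    return k / (1 + ((k - p0) / p0) * math.exp(-r * t))

p0, r, k = 1.0, 0.5, 100.0
for t in (0, 5, 10, 20, 40):
    print(t, round(exponential(p0, r, t), 1), round(logistic(p0, r, k, t), 1))
```

At small t the two columns track each other; at large t the exponential
column explodes while the logistic one sits just under K, which is Andy's
point below that "no exponential increase ever did that yet".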

I listened to the linked video, and I must say those guys did not seem
to pick up on Kurzweil's extrapolation of the escalating exponential
curve. They seemed to present a more probable future scenario. The
main point, as I understand it, is the issue of different people
experiencing access to information differently. Boosting the technical
accessibility of "unlimited information" will not give the same result
for every individual, because what comes out depends to a great deal
on who is navigating this brand new, super-efficient synapse-based
interface. And since our brains learn pretty fast, at some point people
will be divided into classes along an accelerating curve: classes with
different abilities to digest and thrive on the brave new world.

I can't see why you wouldn't agree with that future scenario. If you
look around, you may find that this process has already started. Not a
very bright outlook, though; in many ways it is the antithesis of the
present ideas of democracy and everyone's equal rights. I personally
think knowledge and technology are good, but I often miss the political
perspective that could educate us so that we are able to set the
controls for a target we want and avoid less wanted scenarios.

Per


On Wed, Mar 9, 2011 at 10:46 AM, andy butler <akbutler@tiscali.co.uk> 
wrote:
> Again this depends on the assumption that an exponential increase will
> extend "to infinity".
>
> In the history of the universe, no exponential
> increase ever did that yet.
>
> Look at the growth of any organism,
> in the early stages cell division is exponential,
> but there's no infinitely large carrot.
>
> andy
>
>
> Louie Angulo wrote:
>>
>> here is a further interesting take on Kurzweil's philosophy from my
>> previous thread.
>>
>> http://www.youtube.com/watch?v=zK4gevQ5uqg
>>
>> cheers
>> Luis
>>
>>
>
>