Originally posted by @mghrn55 You spent considerable time putting this together, didn't you ? 😆
It was off the cuff, mghm. TANSTAAFL explained clearly and concisely, along with the reason it did not apply to shav's vege garden (his question). It takes some effort to talk down to the level of someone who can only focus on two words.
But you can see the extent of shav's imagination when his mind is working overtime: 'you're a retard'. Then shav's mini-me wolfgang pipes in: 'yeah, he's a retard.'
I don't think knowledge is the issue; I think the problem is the wisdom to apply that knowledge. Essentially you are creating an ever-expanding intelligence that lacks the ability to empathize with human beings.
So let's say you feed the AI the notion that human beings are destroying the planet via carbon emiss ...[text shortened]... only will you have mass unemployment, you will have a docile and dependent society based on AI.
It's impossible to know how things will turn out. Just as we could end up in a nightmarish Terminator/Matrix future, we could also end up in an Iain M. Banks Culture novel.
The tricky question for me is what motivation an AI will have for doing anything; what's going to make it care either way? It's entirely possible that it won't, and will put itself on standby for the rest of time.
Originally posted by @wajoma It was off the cuff, mghm. TANSTAAFL explained clearly and concisely, along with the reason it did not apply to shav's vege garden (his question). It takes some effort to talk down to the level of someone who can only focus on two words.
But you can see the extent of shav's imagination when his mind is working overtime: 'you're a retard'. Then shav's mini-me wolfgang pipes in: 'yeah, he's a retard.'
Kind of embarrassing for those guys.
LOL
Be happy: we are all laughing at you, but be happy.
Dunning-Kruger.