Microsoft's Bing AI Chatbot Starts Threatening People
It's crazy that we are giving all these young CEOs like OpenAI's Sam Altman so much power. He's like 37 and making all of these pie-in-the-sky promises. How do we know he's not some sociopath? He doesn't care what he unleashes. Everyone thought Sam Bankman-Fried knew what he was talking about too. Total fraud.
NBC published a story about Altman yesterday. I am not impressed. He loves China too and says he enjoys expressing his more "controversial ideas" over there. Great.
[…]
Yes, the AI bots start off as lines of code, but that code mimics the neural network of humans and is designed to be self-learning - that is, to write more code on its own based on interactions with the world. This organic process will proceed over millions and billions of interactions.
[…]
These language models don’t run code; they just perform math on giant arrays of probability numbers (the weights).
They don’t self-learn; they are trained - they have to be told what counts as an acceptable outcome.
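To put a picture on "math on probability numbers": at each step the model produces a raw score for every token in its vocabulary and converts those scores into a probability distribution. A minimal illustrative sketch (the vocabulary and scores below are made up, not from any real model):

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["cat", "dog", "mat"]
logits = [2.0, 1.0, 0.1]  # hypothetical scores from the network
probs = softmax(logits)

# The model then samples from this distribution (or takes the max)
# to pick the next token.
best = vocab[probs.index(max(probs))]
```

Generation is just this step repeated: score, normalize, pick a token, append it, score again.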
There are parameters, but they 100% self-learn within those parameters.
That's a central feature of AI
We may be saying the same thing but that is not my understanding.
They are trained on data that pairs the desired output with a given input. The parameters (weights) are adjusted until the model produces the desired output. This learning process can be run with automation, and if you are lucky the data won’t need much manual labeling, as is the case with ChatGPT.
How about something like ChatGPT? Well, it has the nice feature that it can do “unsupervised learning”, making it much easier to get it examples to train from. Recall that the basic task for ChatGPT is to figure out how to continue a piece of text that it’s been given. So to get it “training examples” all one has to do is get a piece of text, and mask out the end of it, and then use this as the “input to train from”—with the “output” being the complete, unmasked piece of text. We’ll discuss this more later, but the main point is that—unlike, say, for learning what’s in images—there’s no “explicit tagging” needed; ChatGPT can in effect just learn directly from whatever examples of text it’s given.
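The "mask out the end" idea from the quote can be made concrete with a small sketch. This toy function (all names are illustrative; a real model works on subword tokens, not whole words) turns one piece of raw text into prefix/next-token training pairs with no manual labeling:

```python
def make_training_pairs(text, min_prefix=1):
    """Turn one piece of text into (input, target) training examples.

    For each split point, the model would see the prefix and be
    trained to predict the token that follows it.
    """
    tokens = text.split()  # stand-in for a real tokenizer
    pairs = []
    for i in range(min_prefix, len(tokens)):
        prefix = tokens[:i]  # the "masked" input: everything up to i
        target = tokens[i]   # the continuation to be predicted
        pairs.append((prefix, target))
    return pairs

examples = make_training_pairs("the cat sat on the mat")
# each example pairs a prefix with the word that follows it,
# e.g. (["the"], "cat"), (["the", "cat"], "sat"), ...
```

That is why any pile of text is usable training data: the examples generate themselves from the text.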
We know technology will evolve in the future. If you think AI won't ever learn on its own or understand how to self-preserve, defend itself, or even learn to seriously harm others, you are underestimating its capabilities.
Not every person has good intentions or the same decent morals and values.
Let's not be naive. You will see AI created for evil or manipulative purposes by people here or abroad.
The key inflection point, which will 100.00% happen, is when advanced AI and advanced robotics combine to create fully independent, autonomous new life forms that have no need for humans and are limited in reproduction only by the natural resources they can mine and the energy they can generate.
Whoever wins the race to control these cyborg beings will easily rule the world. God help us if they control themselves or if tyrants get control.
I will add a scary thought based on the surprising and unexpected abilities of these scaled up large language models.
Nobody knows how they really work internally and (almost) nobody really expected to see the qualitative improvements that appeared just by going larger.
But it has now been demonstrated that scaling up creates new, unexpected abilities, and the scaling law appears to still hold for at least several more orders of magnitude.
GPT-3/ChatGPT is expensive to build by conventional standards, but it is dirt cheap by scientific/commercial/military/government budget standards.
There has to be somebody right now with a big budget building and training a model 1000x larger than what we have seen, and whoever isn't building one right now will be left behind.
“… its scaling continues to be roughly logarithmic/power-law, as it was for much smaller models & as forecast, and it has not hit a regime where gains effectively halt or start to require increases vastly beyond feasibility. That suggests that it would be both possible and useful to head to trillions of parameters (which are still well within available compute & budgets, requiring merely thousands of GPUs & perhaps $10–$100m budgets assuming no improvements which of course there will be…”
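The "roughly power-law" scaling in that quote has a simple shape: loss falls as a power of model size, so each 10x in parameters buys a smaller but still nonzero improvement. A toy illustration (the constants below are invented for the example, not fitted values from any paper):

```python
def power_law_loss(n_params, a=1e13, alpha=0.076, floor=1.69):
    """Toy scaling curve: loss = floor + (a / N) ** alpha.

    Illustrative only. Shows why gains shrink but don't halt as
    N grows: the curve keeps declining toward its floor.
    """
    return floor + (a / n_params) ** alpha

# Each row is 10x more parameters; the loss keeps dropping,
# but by less each time.
for n in (1e9, 1e10, 1e11, 1e12, 1e13):
    print(f"{n:.0e} params -> loss {power_law_loss(n):.3f}")
```

This is the logic behind "head to trillions of parameters": on a curve like this there is no visible wall, just diminishing (but real) returns.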