If you're trying to be funny, that's fine... but you should leave some sort of clue.
Otherwise someone might take you seriously.
It was a joke, but there is an element of truth there too.
Quad core processors are a necessity for most home users.
One core dedicated to running the virus scanner.
One core dedicated to installing Windows updates.
One core dedicated to running the Aero interface.
One core to do actual useful work.
That was a joke too, but again, there is an element of truth.
Location: Mableton, GA USA (NW Atlanta suburb, 4 miles OTP)
11,334 posts, read 26,006,593 times
Reputation: 3990
Quote:
Originally Posted by NJBest
If we come to a point where speed doesn't matter to consumers anymore, that just means that we're slacking in software innovation.
Only if "software innovation" to you means "requires a lot more CPU", which these days generally means "using more bloated libraries and frameworks to generate the same core functionality we've had for two decades but in a prettier package".
Video and network/bandwidth seem to be the primary bottlenecks these days.
Quote:
Originally Posted by rcsteiner
Only if "software innovation" to you means "requires a lot more CPU", which these days generally means "using more bloated libraries and frameworks to generate the same core functionality we've had for two decades but in a prettier package".
Video and network/bandwidth seem to be the primary bottlenecks these days.
Nope. Nice try though.
Bandwidth surely is a bottleneck, as is storage (HDD/SSD)... but so is CPU. Most people don't consider the CPU a bottleneck since software typically does not make it to market until the processing power is available. That's different from network applications. There are many areas of software that, with the help of faster CPUs, could have a great impact on how the average consumer uses computers today. For example, contextual analysis, perception, and inference.
Quote:
Originally Posted by NJBest
Nope. Nice try though.
Bandwidth surely is a bottleneck, as is storage (HDD/SSD)... but so is CPU. Most people don't consider the CPU a bottleneck since software typically does not make it to market until the processing power is available. That's different from network applications. There are many areas of software that, with the help of faster CPUs, could have a great impact on how the average consumer uses computers today. For example, contextual analysis, perception, and inference.
Can you give me a real world example of an application that would be used by a typical consumer?
Quote:
Originally Posted by rcsteiner
Can you give me a real world example of an application that would be used by a typical consumer?
Currently, you can't go to your computer and say in natural language, "Create a presentation that shows the link between smoking and cancer". There are two issues here. First, computers cannot understand natural language. For example, "call a taxi for me" and "fetch me a cab" mean the same thing in natural language, but computers (at the consumer level) cannot recognize them as the same thing.
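To make the paraphrase problem concrete, here's a toy sketch (not a real NLP system, and the synonym mappings are made-up examples): naive keyword matching treats the two phrasings as different, so even this crude approach needs a normalization table just to see them as the same request.

```python
# Toy illustration: normalize paraphrased commands via a hand-built
# synonym table so "call a taxi for me" and "fetch me a cab" match.
# All mappings here are invented for the example.

SYNONYMS = {
    "fetch": "call",
    "cab": "taxi",
    "me": "",    # drop filler words
    "a": "",
    "for": "",
}

def normalize(command: str) -> tuple:
    """Map each word to a canonical form and drop filler words."""
    words = (SYNONYMS.get(w, w) for w in command.lower().split())
    return tuple(sorted(w for w in words if w))

# Both phrasings reduce to the same canonical request:
print(normalize("call a taxi for me"))  # ('call', 'taxi')
print(normalize("fetch me a cab"))      # ('call', 'taxi')
```

Of course, real natural language understanding can't be solved with a lookup table — this only shows how quickly the problem outgrows simple string matching.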
The other issue is that for a computer to effectively gather and organize information about a relationship between two entities, it needs to understand context. Watson has shown that we're getting very close, but Watson is a supercomputer full of many, many CPUs.
We currently adapt ourselves to the ways computers input and output data. In the future, we'll have computers that understand us in our natural environment using our natural methods of communication. This includes speech, gestures, and even facial expressions.
Technologies such as Siri, the Wii, and Kinect are just the beginning. As software improves and is able to harness all that CPU power effectively, it will be used for more than entertainment.
Another good example is search. I can't effectively search through my pictures. I can't ask my computer to find all the pictures that have my dog in them. No one would argue that this is impossible... but the software and CPU power to do so are not available at the consumer level.
Quote:
Originally Posted by NJBest
Currently, you can't go to your computer and say in natural language, "Create a presentation that shows the link between smoking and cancer". There are two issues here. First, computers cannot understand natural language. For example, "call a taxi for me" and "fetch me a cab" mean the same thing in natural language, but computers (at the consumer level) cannot recognize them as the same thing.
Hmmm. Let's go back in history.
OS/2 Warp 4 was a PC desktop OS released by IBM with trainable voice dictation and voice navigation in the standard package ... in 1996. Some early boxes had a headset microphone in the box. I had two.
That was 15 years ago. It worked rather well, from what I remember, well enough for people who were patient enough to go through the training process to use voice for most of their common desktop operations, to write documents, etc. I played with it some, but I'm faster typing with my hands than I am speaking, so I didn't really find much use for it. I did know two people who completely switched to using it, though.
At that point in time, the high-end boxes were dual-CPU 200 MHz Pentium Pro boxes (single-core 686, Socket 8) with perhaps 128MB or 256MB of RAM. IBM's VoiceType Dictation required training, but it was relatively sophisticated, and it would work on a machine with considerably lesser power than the above.
We already have over 10 times the CPU power in each core on existing machines. 200MHz is only 0.2GHz, and the Windows 95 version of IBM's product claimed it would run on a 90 MHz Pentium. That's a 586 chip, which is a far cry from a 686 with 32-bit code. Maybe 0.07 GHz?
You really think natural language processing will require that much more than the dictation software back then? Voice recognition software is not my speciality, but I find that hard to believe. What you describe above appears to be the result of a very sophisticated action recognition engine, but the voice processing part of it seems trivial to me, and the database that would associate smoking and cancer and create a presentation would be the main part of the work.
Such associations are prime targets for preprocessing. Generating relational tables is one of the main ways to speed up any processing like that. If you want to do it on the fly all the time, I would probably question your seriousness about solving the problem.
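The precomputation idea above could be sketched roughly like this (a hedged toy example — the document collection is invented, and a real system would be far more sophisticated): build a term co-occurrence table once, ahead of time, so that asking "how strongly are smoking and cancer associated?" becomes a dictionary lookup rather than an on-the-fly scan.

```python
# Toy sketch: precompute term co-occurrence counts from a document
# collection so association queries become constant-time lookups.
# The documents are made up for illustration.

from collections import Counter
from itertools import combinations

documents = [
    "smoking is a leading cause of lung cancer",
    "cancer risk rises with smoking duration",
    "exercise lowers heart disease risk",
]

# Build the association table once, ahead of time.
cooccurrence = Counter()
for doc in documents:
    terms = set(doc.split())
    for a, b in combinations(sorted(terms), 2):
        cooccurrence[(a, b)] += 1

def association(x: str, y: str) -> int:
    """Query time is a dictionary lookup, not a corpus scan."""
    return cooccurrence[tuple(sorted((x, y)))]

print(association("smoking", "cancer"))  # 2 (co-occur in 2 documents)
```

The trade-off is classic: the heavy work moves to an offline indexing pass, and the interactive query stays cheap.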
Quote:
The other issue is that for a computer to effectively gather and organize information about a relationship between two entities, it needs to understand context. Watson has shown that we're getting very close, but Watson is a supercomputer full of many, many CPUs.
The basic understanding of complex relationships is a very different problem from the ability to generate such relationships on the fly in real time.
I can see that taking a lot of CPU power, but I question how relevant it is to consumer software products in the near future.
Quote:
We currently adapt ourselves to the ways computers input and output data. In the future, we'll have computers that understand us in our natural environment using our natural methods of communication. This includes speech, gestures, and even facial expressions.
Technologies such as Siri, the Wii, and Kinect are just the beginning. As software improves and is able to harness all that CPU power effectively, it will be used for more than entertainment.
Mainframes have had separate dedicated I/O processors for decades for a good reason ... there's no need to use core CPU power if you can embed a dedicated CPU in your peripheral devices.
You're talking about expansions to the base hardware platform, but those may or may not require more processing power at the core.
Quote:
Another good example is search. I can't effectively search through my pictures. I can't ask my computer to find all the pictures that have my dog in them. No one would argue that this is impossible... but the software and CPU power to do so are not available at the consumer level.
I would argue that the main bottleneck is media speed in the above instance, since you have to filter through a rather large amount of mass storage during that process. Obviously, some way to build an index dynamically at the point one adds each new file is one of the better ways to handle it. But how does one recognize images? That's an issue which requires smart algorithms, yes, but who knows how much CPU would be needed. If well implemented, I suspect not a lot...
I dunno...
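The "index at the point one adds each new file" idea could be sketched like this (a minimal toy, with a stubbed-out recognizer standing in for the genuinely hard image-recognition step — the paths and tags are invented): recognition runs once per photo at import time, and the query is then a cheap lookup against an inverted index rather than a scan of mass storage.

```python
# Toy sketch of incremental indexing: tag each photo once on import,
# store tags in an inverted index, and make search a lookup.
# recognize() is a placeholder, not a real image classifier.

index = {}  # tag -> set of file paths

def recognize(path: str) -> set:
    """Stand-in for an image-recognition step (assumed, not real)."""
    return {"dog"} if "dog" in path else set()

def add_photo(path: str) -> None:
    """Run recognition once, at import time, and update the index."""
    for tag in recognize(path):
        index.setdefault(tag, set()).add(path)

def search(tag: str) -> set:
    """Query time is an index lookup, not a full-disk scan."""
    return index.get(tag, set())

add_photo("2011/dog_park.jpg")
add_photo("2011/sunset.jpg")
print(search("dog"))  # {'2011/dog_park.jpg'}
```

This moves the CPU cost to import time, which is exactly the tension raised later in the thread: someone still has to pay for the recognition pass when a card full of photos is unloaded.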
Last edited by rcsteiner; 11-19-2011 at 02:48 AM..
Quote:
Originally Posted by rcsteiner
OS/2 Warp 4 was a PC desktop OS released by IBM with trainable voice dictation and voice navigation in the standard package ... in 1996. Some early boxes had a headset microphone in the box. I had two.
That was 15 years ago. It worked rather well, from what I remember, well enough for people who were patient enough to go through the training process to use voice for most of their common desktop operations, to write documents, etc. I played with it some, but I'm faster typing with my hands than I am speaking, so I didn't really find much use for it. I did know two people who completely switched to using it, though.
At that point in time, the high-end boxes were dual-CPU 200 MHz Pentium Pro boxes (single-core 686, Socket 8) with perhaps 128MB or 256MB of RAM. IBM's VoiceType Dictation required training, but it was relatively sophisticated, and it would work on a machine with considerably lesser power than the above.
We already have over 10 times the CPU power in each core on existing machines. 200MHz is only 0.2GHz, and the Windows 95 version of IBM's product claimed it would run on a 90 MHz Pentium. That's a 586 chip, which is a far cry from a 686 with 32-bit code. Maybe 0.07 GHz?
You really think natural language processing will require that much more than the dictation software back then? Voice recognition software is not my speciality, but I find that hard to believe. What you describe above appears to be the result of a very sophisticated action recognition engine, but the voice processing part of it seems trivial to me, and the database that would associate smoking and cancer and create a presentation would be the main part of the work.
Such associations are prime targets for preprocessing. Generating relational tables is one of the main ways to speed up any processing like that. If you want to do it on the fly all the time, I would probably question your seriousness about solving the problem.
The basic understanding of complex relationships is a very different problem from the ability to generate such relationships on the fly in real time.
I can see that taking a lot of CPU power, but I question how relevant it is to consumer software products in the near future.
Yes... it's significantly more complex than voice recognition. Modern databases use relations that are defined by humans after the normalization process. So the association between every two potential entities would not pre-exist in a database... at least not in any type of database available to us today.
But that's not the issue. Finding the data is not a problem. Even relating it is not an issue. But determining what is relevant and how to organize and display it is. You seem to be confusing being able to store and retrieve data with a computer's ability to understand it in context.
Quote:
Originally Posted by rcsteiner
Mainframes have had separate dedicated I/O processors for decades for a good reason ... there's no need to use core CPU power if you can embed a dedicated CPU in your peripheral devices.
You're talking about expansions to the base hardware platform, but those may or may not require more processing power at the core.
The peripheral would be the microphone or sensors. Making sense of the input would have to be done on the CPU for obvious reasons. I don't even know how you could argue that the peripheral would be responsible for making sense of the input.
Quote:
Originally Posted by rcsteiner
I would argue that the main bottleneck is media speed in the above instance, since you have to filter through a rather large amount of mass storage during that process. Obviously, some way to build an index dynamically at the point one adds each new file is one of the better ways to handle it. But how does one recognize images? That's an issue which requires smart algorithms, yes, but who knows how much CPU would be needed. If well implemented, I suspect not a lot...
I dunno...
Quite a bit of CPU power, unless you want to wait for the index to build every time you unload your SD card.
Quote:
Originally Posted by NJBest
There are many areas of software that, with the help of faster CPUs, could have a great impact on how the average consumer uses computers today. For example, contextual analysis, perception, and inference.
CPU clock speed, sure. But more cores? What is the advantage to having 6, 8, 12+ cores when the average consumer barely uses two? For a bit of clarification, by consumer I mean the Average Joe. I understand that those in the audio/video/design/scientific/etc. communities, who more-often-than-not benefit from multi-core processors, are consumers, too. Not talking about them.
I already have more than I need with this quad core AMD. I'd rather spend the money on good steaks.