This is just wrong. The manual 'long form math' that we are talking about is not required for anything in computing. The more relationships the kids see and understand in the numbers, as shown by the steps in common core's approach to math, the better; that is what will allow a child to understand advanced math or write computer algorithms. In fact, the way kids do math in common core is the way math is done on a computer with its array of logic gates, only humans use a 'base 10' approach while a computer usually uses a 'base 16' approach.
I write computer programs a lot, and I don't need to know the relationships in the numbers at all; I need to know how to tell the computer to get the answers, and a computer doesn't use anything close to CC to do that.
It doesn't break the following math problem
33 + 88 into
30 + 80 + 3 + 8 = 30 + 80 + 10 + 1 to get 121
It goes 33 + 88, PERIOD. And if it breaks it up, it goes 8 + 3 = 11, then 11 + 30 + 80, which is the old way... not even close to CC.
There isn't a damn thing about computers similar to CC despite your claims.
Quote:
Originally Posted by HiFi
This is just wrong. The manual 'long form math' that we are talking about is not required for anything in computing.
Nonsense.
Traditional right-to-left, column-wise 2+ digit addition is the first for/do-while algorithm; it also teaches the if-then condition for carrying a value into a new column/struct.
Traditional right-to-left, column-wise 2+ digit subtraction is the first for/do-while algorithm that teaches testing an if-then before each step of the calculation, then borrowing from another column/struct depending on the result of the test.
Traditional right-to-left, column-wise 2+ digit multiplication is the first nested for/do-while of successive operations (if you view it as column-wise successive addition according to the "for i <= x; i++" basis traditional multiplication uses).
Traditional long division teaches if-then, for/do-while, and multiple outputs, called quotient and remainder.
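A minimal sketch (mine, not anyone's posted code) of that first algorithm: column addition really is a loop over digits with an if-then carry test, written here in Python:

```python
def column_add(a: str, b: str) -> str:
    """Traditional right-to-left column addition: a for-loop with a carry flag."""
    a, b = a[::-1], b[::-1]          # reverse so index 0 is the ones column
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        d = carry
        if i < len(a):
            d += int(a[i])
        if i < len(b):
            d += int(b[i])
        carry = 1 if d >= 10 else 0  # the if-then that moves a value into the next column
        out.append(d % 10)
    if carry:
        out.append(carry)            # leftover carry opens a new column
    return ''.join(str(d) for d in reversed(out))

print(column_add("33", "88"))   # → 121
print(column_add("38", "44"))   # → 82
```

The function name and string-digit representation are my own illustration; the loop body is the same ones-then-tens procedure described above.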
Now, do binary and hex arithmetic using the fuzzy "math story/situation" methods of common core, and see how well they stack up against the traditional algorithm-based arithmetic upon which computers are based. Traditional arithmetic is everyone's first exposure to STANDARD algorithms. So while most people aren't building microcode control stores for Intel or writing assembly programs/apps for TI calculators and other low-level systems, anyone working in computers in virtually any way, besides artists and point-click commandos, is using the mental skills they should have learned in traditional arithmetic all the freaking time.
Quote:
Originally Posted by HiFi
The more relationships the kids see and understand in the numbers, as are shown by the steps used with common core's approach to math, the better; that is what will allow the child to understand advanced math or be able to write computer algorithms.
That's nonsense, especially for advanced math. Go ahead and show me the fuzzy-math way to divide polynomials. Show me the fuzzy "math story/situation" for arithmetic with fractions, exponents, and variables. By all means.
Now, show me how teaching kids to disregard place-wise, right-to-left operations with carries, while conjuring up their own methods for addition and subtraction, helps them understand the concept of carry/overflow flags in computer systems.
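For reference, a carry flag is just the bit left over from a fixed-width add; a small Python sketch (my own illustration, with a made-up `add8` helper mimicking an 8-bit register):

```python
def add8(x: int, y: int) -> tuple[int, int]:
    """8-bit add: returns (result, carry_flag), the way a CPU's ADD sets the carry."""
    total = (x & 0xFF) + (y & 0xFF)     # clamp both operands to 8 bits, then add
    return total & 0xFF, 1 if total > 0xFF else 0

print(add8(200, 100))  # → (44, 1): 300 doesn't fit in 8 bits, so the carry flag is set
print(add8(38, 44))    # → (82, 0): fits, no carry
```

This is exactly the column-arithmetic carry, only the "column" is a whole register.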
Quote:
Originally Posted by HiFi
In fact the way the kids do math in common core is the way math is done on a computer with its array of logic gates, only with humans we use a 'base 10' approach while on a computer we usually do a 'base 16' approach.
Nonsense again.
Computers do binary math exactly according to standard traditional arithmetic, only instead of "place-wise" it is "bitwise", and it goes from 2^0 to 2^31, or more commonly these days 2^63, but it still runs from right to left just as traditional numeric writing places digits. 10^0 is rightmost and the powers of 10 ascend going left, and the same is true of the powers of 2 in any 32- or 64-bit register. The math is done exactly the same way as traditional arithmetic. Hell, inside a half-adder, XOR produces the sum bit and AND produces the carry, and on single bits AND is exactly one-digit multiplication. Work through the truth tables if you doubt me.
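A sketch of that bitwise claim (my Python illustration, not anyone's posted code): a gate-level full adder, rippled right to left exactly like column addition in base 2:

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One binary column built from gates: XOR for the sum bit, AND/OR for the carry."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x: int, y: int, bits: int = 8) -> int:
    """Right-to-left, bitwise: the traditional column algorithm in base 2."""
    result, carry = 0, 0
    for i in range(bits):                       # least significant bit first
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(38, 44))  # → 82
```

The carry out of each bit feeds the next column to the left, just as in pencil-and-paper addition.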
And the larger meta disagreement is that computers do not have feelings or make judgments. They operate under a fixed, rigid set of instructions. So the human user may decide that today, arithmetic will be OK if it is close enough, but a computer will never operate that way. It will always be perfectly binary, with right = right, and wrong = wrong, garbage in, garbage out. The faster humans can be taught that the RULES OF MATH AND LOGIC DO NOT CARE HOW THEY FEEL, the better.
I don't think your kid was under common core, because clearly it is designed to teach the kid how to do math from the basis of why instead of just memorizing tables. It is what allows one to do math without crutches. Many kids of the old system could get by with memorization but did not actually understand anything and struggled when they got to advanced maths.
LOL....
OK then... how does one "discover" the quadratic formula on their own?
Because that was the issue he had. I had him memorize the formula... the old-fashioned way.
Because the WHY of all those variables comes to light in pre-calc, after you have learned about imaginary numbers.
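For what it's worth, the "why" behind the formula is just completing the square; a quick sketch of the derivation:

```latex
ax^2 + bx + c = 0
\;\Rightarrow\; x^2 + \tfrac{b}{a}x = -\tfrac{c}{a}
\;\Rightarrow\; \left(x + \tfrac{b}{2a}\right)^2 = \tfrac{b^2 - 4ac}{4a^2}
\;\Rightarrow\; x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```

When the discriminant b^2 - 4ac goes negative, the square root is where the imaginary numbers mentioned above come in.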
Quote:
Originally Posted by HiFi
This is just wrong. The manual 'long form math' that we are talking about is not required for anything in computing...
I worked on math libraries for 5 years. It certainly IS required, and that's how computers compute.
Quote:
Originally Posted by HiFi
This is just wrong. The manual 'long form math' that we are talking about is not required for anything in computing...
Very few people write assembler code where logic gates are needed.
And at its very core a computer works in binary: 1 or 0.
Quote:
I write computer programs a lot, and I don't need to know the relationships in the numbers at all... a computer doesn't use anything close to CC to do that. There isn't a damn thing about computers similar to CC despite your claims.
CC and computer programs share the concept of number resolution: digits of significance, bytes of significance, or bits of significance. On the left of the number is the most significant digit (or most significant byte, or most significant bit), as it represents the largest value.

On a computer, when you have a grid (for example a spreadsheet of cells, or a video game made up of tiles), the width/height of each cell or tile becomes a resolution of significance. When you have a mouse coordinate over that spreadsheet or game environment and you want to know which tile it is in, you discard the bits below that width; the remaining number represents the tile you are in. Then, when you want to know how far the mouse cursor is inside that tile, you consider only the bits you discarded; those bits alone represent how far you are inside the tile. You operate in 'grid space' and in 'tile space', just as in CC you can add numbers at base-10 resolution or base-100 resolution, depending on what is important. Or you can divide at a certain resolution (a bit shift) to get the information you need. In CC you learn to manipulate numeric values in your head to get them to a useful resolution, just like you need to do when programming. There are lots of instances, especially in graphics/rendering/simulated physics, where you mod, shift, or mask to consider only the significant portion, or the resolution, of the number you are interested in.
There are lots of instances in communication protocols or data formats where you work with only the data you need: packing information into the least possible bit space, pulling a few bits out of a protocol that are shifted to represent a more significant portion of the number, or discarding resolution on something that does not need to be precise or that always falls on certain multiples, like audio/video samples truncated to match the bitrate, lossy compression, or video game graphics that use only powers of 2 for certain things. A person doing common core breaks large numbers down the way a computer breaks them down to bits, then knows which pieces/bits to operate on, like a programmer does.
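A small Python sketch of the grid-space/tile-space idea described above (the tile width of 32 and the function name are my own illustration):

```python
TILE_BITS = 5                 # tile width of 32 pixels: a power of two, so shifts/masks work

def tile_coords(mouse_x: int) -> tuple[int, int]:
    """Split a coordinate into 'grid space' and 'tile space' by bit significance."""
    tile_index = mouse_x >> TILE_BITS          # discard the low 5 bits (x // 32)
    offset_in_tile = mouse_x & (2**TILE_BITS - 1)  # keep only the low 5 bits (x % 32)
    return tile_index, offset_in_tile

print(tile_coords(100))  # → (3, 4): tile 3, 4 pixels into it, since 100 = 3*32 + 4
```

The high bits answer "which tile?" and the discarded low bits answer "where inside the tile?", the two resolutions the post is talking about.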
Quote:
CC and computer programs have the concept of number resolution, or digits of significance, or bytes of significance, or bits of significance... A person doing common core is breaking large numbers down like a computer breaks it down to bits, then knows which pieces/bits to operate on like a programmer does.
Not even close. Common core, for example, breaks
38 + 44 into
30 + 40 + 8 + 4
then
70 + 8 + 4
then
70 + 12
then
80 + 2
then 82..
BULL ****..
Add the damn 8 and 4 to get 12, write the 2 and carry the 1, then add 3 plus 4 plus 1 to get 82. DONE.
Computers don't break numbers down into 30's and 40's and then add the 8 and 4 separately; it's as basic as one can get: the numbers are converted into 0's and 1's.
Where 1 + 4 is, in binary, 1 + 100 = 101, which converts back to a decimal 5.
That's not common core, not even close.
There have been numerous individuals here who have confirmed you are completely wrong; they also have programming backgrounds, and their conclusions confirmed mine.
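For comparison, the decomposition in the steps listed above can be written out in Python (my sketch; the function name is made up):

```python
def cc_add(x: int, y: int) -> int:
    """The common-core style split the post lists: tens and ones combined separately."""
    tens = (x // 10) * 10 + (y // 10) * 10   # 30 + 40 = 70
    ones = x % 10 + y % 10                   # 8 + 4 = 12
    return tens + ones                       # 70 + 12 = 82

print(cc_add(38, 44))  # → 82
```

Whether this counts as "what computers do" is exactly what the thread is arguing about; the code only restates the steps quoted above.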
You're still not getting it.
A computer doing 38 + 44, in base 16 (hex), is doing 26 + 2C.
Let's say it is a computer with a 4-bit adder.
4 bits, or half a byte, is called a nibble.
So each number is two nibbles: 2,6 and 2,C.
For the most significant nibble it does 2 + 2
4
which represents 20 + 20
40 (like the kid did in common core)
Then for the least significant nibble it does 6 + C
12
12 overflowed; it's 5 bits, so you add the bit which overflowed to the most significant nibble and it becomes
50, and what is left in the least significant nibble is 2
so your answer is
50 2
52
Convert that to base 10 and it's
82
That's how the kid in CC did it, by breaking it down into 10s; the program using the 4-bit adder broke it down into 16s, and the adder itself broke it down into 2s (binary). The same principle applies at all resolutions.
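The nibble-by-nibble walkthrough above can be sketched in Python (my illustration, mirroring the order the post uses):

```python
def nibble_add(x: int, y: int) -> int:
    """Add two bytes one hex digit (nibble) at a time, as described above."""
    hi = (x >> 4) + (y >> 4)      # most significant nibbles: 2 + 2 = 4  (i.e. 0x40)
    lo = (x & 0xF) + (y & 0xF)    # least significant nibbles: 6 + C = 0x12, which overflows
    hi += lo >> 4                 # the overflowed 5th bit carries into the high nibble
    return ((hi & 0xF) << 4) | (lo & 0xF)

print(nibble_add(38, 44))  # → 82, i.e. 0x26 + 0x2C = 0x52
```

Same numbers as the walkthrough: 0x26 + 0x2C gives nibbles 5 and 2, hex 52, decimal 82.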
Get it yet?
But it's not base 16, it's binary: 1 and 0.
Base 16 is how you code it, not how the computer computes it.