It’s just an incredibly weak defense. Why is it worse for C to use an extra decimal for these differences? I can just as well argue that C is a more accurate representation, because small differences in temperature show up as correspondingly small numbers. Just like your argument, this is purely an opinion - until you can show me that not needing the extra decimal is objectively better, or until I can show you that smaller differences being represented as such is objectively better, neither of them holds any weight.
It’s the same reason we use abbreviations and contractions when speaking. A trivial simplification is still a simplification.
Why bother with Celsius at all when there is Kelvin? Even Kelvin is arbitrary. Best to use Planck-normalized temperature. The scale would run from 0 to 100, where 0 is absolute zero and 100 is 10^32 Kelvin.
So whenever you have to tell someone the temperature outside, you say it’s 0.000000000000000000000000015237 Plancks
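(For concreteness, here’s a rough back-of-the-envelope version of that conversion. This is a sketch only: it assumes an outdoor temperature of about 15 °C and pins 100 on the proposed scale to 10^32 K as described above, so the exact figure shifts if you instead pin it to the actual Planck temperature of roughly 1.417 × 10^32 K.)

```python
# Rough sanity check: an everyday temperature on the proposed 0-100
# "Planck-normalized" scale. Assumptions: outdoor temperature ~15 C,
# and 100 on the scale corresponds to 1e32 K (per the comment above).
outdoor_c = 15.0
outdoor_k = outdoor_c + 273.15   # convert Celsius to Kelvin
scale_top_k = 1e32               # temperature assigned the value 100

value = outdoor_k / scale_top_k * 100
print(f"{outdoor_c} C is about {value:.4e} on the proposed scale")
# prints roughly 2.8815e-28: 27 leading zeros after the decimal point
# before the first significant digit
```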
If 3 digits isn’t even a tiny bit more cumbersome than 2, then 32 digits is fine too.
We don’t have issues with decimals in many places. For example, why are there pennies? Why aren’t dollars just scaled up by a factor of 100? Generally speaking: why don’t people immediately switch to a smaller unit when talking about, e.g., 3.5 miles? If you’re correct, those should be simplified too - yet they aren’t.
Why bother with Celsius at all when there is Kelvin?
Because Celsius uses a scale that relies on temperatures you encounter in your everyday life.
Even Kelvin is arbitrary. Best to use Planck-normalized temperature. The scale would run from 0 to 100, where 0 is absolute zero and 100 is 10^32 Kelvin.
Because Celsius uses a scale that relies on temperatures you encounter in your everyday life.
But that’s the same reason given for Fahrenheit!
Why? That scale is still arbitrarily chosen.
It’s not arbitrary in that it represents the fundamental limits of temperature in the universe. Planck units are fundamental to the nature of the universe rather than based on any arbitrary object.
https://en.m.wikipedia.org/wiki/Planck_units
I would also argue that Fahrenheit is better suited for everyday life than Kelvin is. Both Celsius and Fahrenheit are objectively closer than Kelvin to the temperatures we encounter. Fahrenheit being closer than Celsius is subjective. Do you understand?
It’s not arbitrary in that it represents the fundamental limits of temperature in the universe.
There are still a bunch of arbitrary decisions:
what is your minimum and maximum (e.g. why 0/100? Why not 0/1?)
what does zero represent (e.g. why is 0 minimum? Why not center?)
how do you scale (e.g. linear/logarithmic)
All of these are arbitrary decisions you’ve made when you suggested Planck temperature with a scale from 0 to 100. Do you understand?
Fahrenheit being closer than Celsius is subjective. Do you understand?
Given that you already said you have to use 3 digits to give Celsius the resolution that matches human temperature sensing, that’s not true. 1 degree F is roughly the average threshold at which humans can perceive a difference in temperature. It’s why thermostats use 3 digits for Celsius but only 2 for Fahrenheit.
The only reason you say C matches people is because you are used to 21.5 C being a regular indoor temperature. If you grew up with Kelvin that would be about 294.7 K. Three digits instead of four.
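(As a quick illustration of the digit-count point being argued on both sides, here’s a small sketch. The 21.5 °C figure is the example above; the Fahrenheit value is just the standard conversion.)

```python
# How many digits the same indoor temperature needs in each unit.
celsius = 21.5
kelvin = celsius + 273.15           # 294.65 K
fahrenheit = celsius * 9 / 5 + 32   # 70.7 F

for unit, value in [("C", celsius), ("K", kelvin), ("F", fahrenheit)]:
    text = f"{value:g}"
    digits = sum(ch.isdigit() for ch in text)
    print(f"{text} {unit}: {digits} digits")
# 21.5 C -> 3 digits, 294.65 K -> 5 digits, 70.7 F -> 3 digits
# (rounded to whole degrees, as a Fahrenheit thermostat would show it, 71 F is 2 digits)
```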
what is your minimum and maximum
Doesn’t matter. Base 10 would be better so it matches the rest of metric. The decimal point shifts a place or two, but that doesn’t change the number of digits needed to represent a temperature.
Zero is absolute zero. You can’t have below zero because temperature is a measure of motion.
Linear to match the rest of the metric system.
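(To make the decimal-shift point concrete: a sketch only, reusing the 10^32 K ceiling proposed earlier and varying nothing but whether the top of the scale is labelled 1 or 100.)

```python
# The same temperature on a linear absolute scale, changing only whether the
# assumed 1e32 K ceiling is labelled 1 or 100.
kelvin = 294.65      # the indoor temperature used earlier in the thread
ceiling_k = 1e32     # assumed maximum of the proposed scale

for top in (1, 100):
    print(f"scale 0-{top}: {kelvin / ceiling_k * top:.4e}")
# scale 0-1:   2.9465e-30
# scale 0-100: 2.9465e-28
# Same significant digits either way; only the decimal point moves.
```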
The point was that they need that extra decimal because C isn’t well matched to human temperature sense.
It’s not like you are prohibited from using decimals in Fahrenheit. It’s that you don’t need 3 digits because it works better for people.
And fuck you for making me defend the most ass backwards measurement system on the planet.