Y2K specifically makes no sense though. Any reasonable way of storing a year would use a binary integer of some length (especially when you want to use as little memory as possible). The same goes for manipulations; they are faster, more memory efficient, and easier to implement in binary. With an 8-bit signed integer counting from 1900, the concerning overflows would occur in 2028, not 2000. A base 10 representation would require at least 8 bits to store a two digit number anyway. There is no advantage to a base 10 representation, and there never has been. For Y2K to have been anything more significant than a text formatting issue, a whole lot of programmers would have had to go out of their way to be really, really bad at their jobs. Also, usage of dates beyond 2000 would have increased gradually for decades leading up to it, so the idea it would be any sort of sudden catastrophe is absurd.
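For what it's worth, here is the scheme that comment is describing as a toy C sketch (purely hypothetical; as the replies below point out, this is not how those systems actually stored years):

```c
/* Toy sketch of the hypothetical scheme above: an 8-bit signed offset
 * from 1900. It really would not overflow until after 2027. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t offset = 127;                                      /* largest int8_t value */
    printf("last representable year: %d\n", 1900 + offset);   /* 2027 */
    /* one more year and the offset would wrap to -128, i.e. trouble in 2028 */
    return 0;
}
```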
Some of the computers in question predate standardizing on 8 bits to the byte. You’ve got a whole post here of bad assumptions about how things worked.
You don’t spend much time around them, do you?
The issue wasn’t using the dates. The issue was the computer believing the current date was one of those dates.
I’m going to assume you aren’t old enough to remember, but the “only two digits to represent the year” issue predates computers. Lots of paper forms just gave two digits. And a lot of early computer work was just digitising paper forms.
I remember paper forms having “19__” in the year field. Good times
When I was little I was scared of that problem and always put all four digits of the year in my homework.
You’re thinking of the problem with modern solutions in mind. Y2K originates from punch cards, where everything was stored as characters. To save space, only the last 2 digits of the year were stored, because back then you didn’t need to store the 19 of year 19xx. The technique of storing data stayed the same for a long time despite technology advancing beyond punch cards. The assumption that it’s always 19xx caused the Y2K bug, because once the year overflows to 00 the system doesn’t know if it’s 1900 or 2000.
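A minimal C sketch of that convention (simplified; the real systems used character records or BCD rather than C, but the failure mode is the same):

```c
/* Two-digit year field, interpreted under the legacy "it's always 19xx"
 * assumption: 1999 plus one year comes back as 1900. */
#include <stdio.h>

int main(void) {
    char year_field[3] = "99";                    /* record stores only two digits */
    int yy = (year_field[0] - '0') * 10 + (year_field[1] - '0');

    yy = (yy + 1) % 100;                          /* roll over from 1999...        */
    printf("stored digits:  %02d\n", yy);         /* ...field now reads "00"       */
    printf("interpreted as: %d\n", 1900 + yy);    /* legacy assumption -> 1900     */
    return 0;
}
```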
Look up some info on BCD or EBCDIC.
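For anyone who doesn't want to dig through the references: packed BCD stores one decimal digit per nibble, so a two-digit year fits in exactly one byte. A rough C illustration (the encoding is real, the surrounding code is just a toy):

```c
/* Packed BCD: the digits of "99" occupy the two nibbles of a single byte,
 * so the natural field width is "two decimal digits", not a binary integer. */
#include <stdio.h>

int main(void) {
    unsigned char year_bcd = 0x99;                /* digits 9 and 9, one per nibble */
    int tens = (year_bcd >> 4) & 0x0F;
    int ones = year_bcd & 0x0F;
    printf("BCD byte 0x%02X decodes to year ..%d%d\n",
           (unsigned)year_bcd, tens, ones);
    return 0;
}
```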
Oh boy, you heavily underestimate the amount and level of bad decisions in legacy protocols. Read up on the topic. The date was for a long time stored as 6 decimal digits.
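That six-decimal-digit format is typically YYMMDD, and the century rollover breaks even simple date ordering. A quick C sketch (assuming the fields are compared as strings, which is how such records were often sorted):

```c
/* YYMMDD dates compared as strings: January 1st 2000 sorts before
 * December 31st 1999 the moment the year wraps to "00". */
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *last_1999  = "991231";
    const char *first_2000 = "000101";
    if (strcmp(first_2000, last_1999) < 0)
        printf("system thinks 2000-01-01 comes before 1999-12-31\n");
    return 0;
}
```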
You do realize that “counting from 1900” meant storing only the last two digits and just hardcoding the programs to print “19” in front of it in those days? At best, an overflow would lead to 19100, 1910 or 1900, depending on the print routines.
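A rough C sketch of what those print routines could do with a years-since-1900 counter once it hits 100 (simplified, but it shows where 19100, 1910 and 1900 all come from):

```c
/* A hardcoded "19" prefix in front of a years-since-1900 counter:
 * the year 2000 comes out three different wrong ways. */
#include <stdio.h>

int main(void) {
    int years_since_1900 = 100;                   /* the year 2000            */
    char digits[8];
    sprintf(digits, "%d", years_since_1900);

    printf("19%d\n",  years_since_1900);          /* "19100"                  */
    printf("19%.2s\n", digits);                   /* "1910" (field truncated) */
    printf("19%02d\n", years_since_1900 % 100);   /* "1900" (wrapped)         */
    return 0;
}
```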
And then there is PIC 99 in COBOL. In modern languages it makes no sense, but there is still a lot of really old code around, and not everything is two’s complement, especially if you do not need the efficiency in memory and calculations.