• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Valve is a unique company with no traditional hierarchy. In business school, I read a very interesting Harvard Business Review article on the subject. Unfortunately it’s locked behind a paywall, but this is Google AI’s summary of the article, which matches what I remember:

    According to a Harvard Business Review article from 2013, Valve, the gaming company that created Half Life and Portal, has a unique organizational structure that includes a flat management system called “Flatland”. This structure eliminates traditional hierarchies and bosses, allowing employees to choose their own projects and have autonomy. Other features of Valve’s structure include:

    • Self-allocated time: Employees have complete control over how they allocate their time
    • No managers: There is no managerial oversight
    • Fluid structure: Desks have wheels so employees can easily move between teams, or “cabals”
    • Peer-based performance reviews: Employees evaluate each other’s performance and stack rank them
    • Hiring: Valve has a unique hiring process that supports recruiting people with a variety of skills



  • CodeInvasion@sh.itjust.works to Science Memes@mander.xyz · Humor · 2 months ago

    It took Hawking minutes to compose some responses. Without the use of his hands due to his disease, he relied on the twitch of a few facial muscles to select from a list of available words.

    As funny as it is, that interview, like any interview with Hawking, contains pre-drafted responses from him and follows a script.

    But the small facial movements conveying his emotion still showed that Hawking had fun doing it.




  • I am an LLM researcher at MIT, and hopefully this will help.

    As others have answered, LLMs have only learned the ability to autocomplete given some input, known as the prompt. Functionally, the model strictly predicts the probability of the next word+ (more precisely, the next token), with some randomness injected so the output isn’t exactly the same for any given prompt.

    The probability of the next word comes from the model’s training data, combined with a very complex mathematical method, called self-attention, that computes the impact of every previous word on every other previous word and on the newly predicted word. You can think of this as a computed relatedness factor.
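    As a rough illustration of that "relatedness factor", scaled dot-product self-attention can be sketched in a few lines of NumPy. This is a toy with random weights, not a trained model; the dimensions and names are mine, chosen only to show the shape of the computation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    # scores[i, j] is the raw "relatedness" of token i to token j.
    scores = Q @ K.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # each output is a weighted mix of all tokens

rng = np.random.default_rng(0)
n_tokens, d_model = 5, 8
X = rng.normal(size=(n_tokens, d_model))          # 5 token vectors
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one updated vector per token
```

    Note that the score matrix has one entry per pair of tokens, which is exactly why the cost blows up as the context gets longer.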

    Computing this relatedness factor is very expensive, and the cost grows quadratically with the length of the input, so models are limited in how many previous words they can use. This limit is called the context window. The recent breakthroughs in LLMs come from using very large context windows to learn the relationships of as many words as possible.

    This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So, literally, the model builds entire responses one word at a time, from left to right.
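    That left-to-right loop can be sketched like this. The scoring function here is a random stand-in (a real LLM computes these probabilities with self-attention over the whole context); the vocabulary and the `<stop>` marker are made up for the example:

```python
import random

def next_token_probs(context):
    # Toy "model": assign a random probability to each candidate next token.
    vocab = ["the", "cat", "sat", "down", "<stop>"]
    scores = [random.random() for _ in vocab]
    total = sum(scores)
    return {tok: s / total for tok, s in zip(vocab, scores)}

def generate(prompt, max_tokens=20):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        # Sample the next token in proportion to its probability:
        # this is the injected randomness mentioned above.
        tok = random.choices(list(probs), weights=list(probs.values()))[0]
        if tok == "<stop>":   # the special stop token ends generation
            break
        tokens.append(tok)
    return " ".join(tokens)

random.seed(0)
print(generate("the cat"))
```

    Every generated token goes back into the context for the next prediction, which is why anything the model needs to "reason" with has to be in the prompt or in its own earlier output.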

    Because every future word is predicated on the previously stated words, whether in the prompt or in the text generated so far, the model cannot apply even the most basic logical concepts unless all the required components are present in the prompt or have somehow serendipitously been stated in its own generated response.

    This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.

    From this fundamental understanding, hopefully you can now reason about LLM limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make one up, inferring the next most likely word to create a plausible-sounding statement. Essentially, the model has gotten so good at faking language understanding that, even with no factual basis for an answer, it can easily trick an unwitting human into believing the answer is correct.

    ---

    +More specifically, these words are tokens, which often represent a smaller piece of a word. For instance, understand and able would be represented as two tokens that, when put together, form the word understandable.
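    The footnote’s example can be made concrete with a toy greedy longest-match tokenizer. The vocabulary here is invented for the demo; real models learn their vocabularies with schemes like byte-pair encoding:

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization against a fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])          # unknown character: emit it alone
            i += 1
    return tokens

vocab = {"understand", "able", "un", "stand"}
print(tokenize("understandable", vocab))  # ['understand', 'able']
```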



  • I am a pilot and this is NOT how autopilot works.

    There are autoland capabilities in the larger commercial airliners, but an autopilot can be as simple as a wing-leveler.

    The waypoints must be programmed into the GPS by the pilot. Altitude is entirely controlled by the pilot, not the plane, except when flying a programmed instrument approach, and only once the autopilot captures the glideslope (so you need to be in the correct general area in 3D space for it to work).

    An autopilot is actually a major hazard to the untrained pilot and has killed many, many untrained pilots as a result.

    Whereas when I get in my Tesla, I use voice commands to say where I want to go, and nowadays I don’t have to make interventions. Even when it was first released six years ago, it still did more than most aircraft autopilots.





  • I’m convinced that we should hold drivers to the same requirements we use for flying an airplane.

    As a pilot, there are several items I need to log at regular intervals to remain proficient, so that I can continue to fly with passengers or fly under certain conditions. The biggest is the Flight Review required every two years.

    If we did the bare minimum and implemented a Driving Review every two years, our roads would be a lot safer and far fewer people would die. If people cared as much about driving deaths as they do about flying deaths, the world would be a much better place.


  • Oh yes, it costs me $7k a year for the pleasure of managing a property: responding to all the tenants’ needs, shouldering the risk of major future repairs, trusting the tenant to pay on time and in full (collections are practically impossible to enforce), covering the mortgage through vacancies, and paying real estate agent fees that amount to a month’s rent every time I get a new tenant. And that’s all for a house I’m not able to live in, with 20% of its value locked up as a down payment. It’s much more profitable to just let that money sit in the stock market instead.

    But please tell me more about how you know better and that it’s all sunshine and rainbows for a non-corporate landlord.


  • Everyone here loves to complain about landlords without realizing that the majority of single-family-home landlords (not corporate landlords) are barely getting by too.

    Banks are really the ones making criminal amounts of money: typically, 1/3 of rent goes to interest payments and another 1/3 to taxes.

    For instance, I make $2,900/mo. from rent but pay $2,800/mo. for the mortgage. I’ve spent over $8k this year alone on repairs and maintenance. But please continue to complain about how landlords are constantly raking in cash. It’s typical for a homeowner to pay 1% of the property’s value per year to maintain it. I will never see positive cash flow until the mortgage is paid off in 25 years. The only benefit I get from continuing to own the property is the appreciation in equity and the principal payments on the mortgage. At the end of the year we will have a -$7k cash flow and $5k in equity appreciation. In a HCOL area, that $5k on paper is less than 3% of the area’s median yearly salary.
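    For the curious, the arithmetic above pencils out roughly like this (all figures are the ones stated in this comment; taxes and insurance are assumed to be folded into the mortgage payment):

```python
rent = 2_900 * 12      # annual rent collected
mortgage = 2_800 * 12  # annual mortgage payment
repairs = 8_000        # repairs and maintenance this year

cash_flow = rent - mortgage - repairs
print(cash_flow)  # -6800: roughly the -$7k figure above
```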

    I feel for anyone out there whose landlord didn’t consider the hidden costs and the fact that they should expect to run a negative cash flow, because those are the landlords who also can’t afford to fix the house you might be renting.


  • By ungodly experiments, he means your typical round of vaccinations.

    Also, there’s a waiver for just about anything in the military. If there’s an actual medical concern with vaccinations, then you can apply for a waiver. The problem is when people confuse an actual medical condition with a conspiracy theory they read on the internet.