• 0 Posts
  • 5 Comments
Joined 2 years ago
Cake day: August 26th, 2023


  • I believe there isn’t because, at least as I understand it, YouTube costs considerably more money to operate than it makes. The losses shrink at the scale Google has achieved, but even Google still loses money on it, so it seems doubtful that anyone else could break in and beat them at that game.

    They continue to operate it because they get a variety of other things from it that don’t directly make the YouTube division money but feed into their other divisions, which collectively generate considerably more than YouTube loses: user preference and demographic data collection, network effect driving users to their other services, and a direct tie-in to their juggernaut ad network.

    It’s not much different from the “loss leader” products at the store, which you may have heard of. The idea is that you take some product that almost everyone buys frequently, like milk or eggs, put one of the lower-priced ones on extreme sale, often even at a loss, essentially all the time, and use it to drive traffic to your store. You stock those products way in the back, and customers will often buy a bunch of other stuff on their way to and from getting them. The profit from the other products well exceeds the loss on the “loss leader,” even when some customers buy only the loss leader, so the store considers it a winning strategy.
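    The arithmetic behind that works out roughly like this (a toy sketch where every number is made up for illustration, not real retail data):

```python
# Loss-leader back-of-the-envelope: all figures below are invented
# assumptions purely to show how the trade-off nets out.

MILK_LOSS = 0.50       # store loses $0.50 per discounted gallon sold
BASKET_MARGIN = 8.00   # average profit on the rest of a shopper's basket

def net_profit(shoppers: int, milk_only_fraction: float) -> float:
    """Net profit when some fraction of shoppers buy only the loss leader."""
    milk_only = shoppers * milk_only_fraction
    full_basket = shoppers - milk_only
    # Every shopper buys the milk at a loss; only some add a full basket.
    return full_basket * BASKET_MARGIN - shoppers * MILK_LOSS

# Even if 30% of 1000 shoppers grab only the discounted milk,
# the store still comes out well ahead:
print(net_profit(1000, 0.30))  # 700 * 8.00 - 1000 * 0.50 = 5100.0
```

    Under these made-up margins, the strategy only loses money if nearly everyone buys just the loss leader, which is why stores tolerate the per-item loss.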



  • For. Real.

    I switched over some months ago now and tried several different distributions before finally settling on one that could mostly be made to work with everything, as many of them had one or more hardware dealbreakers that prevented them from working out. I think it’s also fair to mention that while many things did just work “out of the box” on all of them, many also did not. Some could be cajoled into cooperating after varying amounts of troubleshooting, editing, and general trial-and-error effort, but there are huge swaths of the user experience that are about as unpolished and manual as they were at the turn of the century.

    I still prefer using it to Windows 11, and it has improved a lot over the years, but I think the main thing that has made Linux increase in appeal over time is the relative continual decline in the quality and behavior of Windows.

    I’m sure a lot of these hindrances can be addressed by building or buying a computer purpose-built to run Linux, but I think the point stands: unless you use your PC only for the “Facebook, email, YouTube” type of stuff, you’re going to run into tasks that take quite a bit of research to get working.

    Don’t get me wrong; I don’t regret my decision in the slightest. Linux offers you very real ownership of your computer and user experience, but it is absolutely not for everyone, and I hope the Linux community at large one day grows to acknowledge that. The tinkering and troubleshooting that many of them aren’t troubled by, and some even enjoy, is fine with them because they are hobbyists and professionals. People outside that sphere see computers more as tools than hobbies, and tools that frequently give you trouble and eat your time are worse than similar ones that don’t.


  • One could argue that if the AI response is not distinguishable from a human one at all, then they are equivalent and it doesn’t matter.

    That said, the current LLM designs have no ability to do that, and so far every effort to improve them beyond where they are today has made them worse at it. So I don’t think any tweaking or fiddling with the model will ever get it closer to what you’re describing, except possibly by swapping in a different, but equally cookie-cutter, way of responding that may look unlike the old output but will be much like the rest of the new output. It will still be obvious and predictable shortly after we learn its new tells.

    The reason they can’t make it better anymore is that they are trying to do so by giving it ever more information to consume, under the misguided notion that once it has enough data it will be smarter overall. That isn’t true, because it has no way to distinguish good data from garbage, and they have already read and consumed the whole Internet.

    Now, when they try to consume new data, a ton of it was itself generated by an LLM, maybe even the same one, so it contains no new information but still takes more compute to read and process. That redundant data also reinforces what the model thinks it knows, counting its own repetition of a piece of information as another corroboration that the information is accurate. It treats conjecture as fact because it saw a lot of “people” say the same thing, when it could have been one crackpot talking nonsense that was then repeated as gospel on Reddit by 400 LLM bots. 401 people said the same thing; it MUST be true!