• 2 Posts
  • 1.46K Comments
Joined 3 years ago
Cake day: June 15th, 2023


  • There are some multi-user aspects to LORD (Legend of the Red Dragon). You can trade and communicate with other players through turn-based messages (like mail). Additionally, you can attack other players who are not staying at an inn, or be attacked yourself by other players.

    Because it's turn-based, you can attack on your turns and instantly see the outcome against the offline player. The computer plays their part in the battle, so you can choose to try to finish the fight or flee if you're getting your ass handed to you. As the defending player you're not there for the battle, so when you log in you see a transcript of what happened, along with your fate and that of the other named player. It's surprisingly exciting even reading it after the battle!


  • I read this story this morning and have been thinking back to it all day. This wasn't just some idiot who was too young or too naive to realize he was talking to a bot and did something like drink bleach because it told him to.

    This was one of us.

    He fit many of the behaviors I see here in myself and my fellow Lemmy posters. He:

    • built computers for himself and family members
    • was a hobbyist (at least) coder
    • wasn’t a young kid who didn’t know the world; he was 48 or 49
    • was an early adopter, embracing modern LLM technology in 2022 when it first really became public
    • sold his house in an urban metropolis (Portland) and moved to a rural area so he could use his woodworking skills to build sustainable housing
    • worked part time at a homeless shelter

    Doesn’t this guy sound like someone that would be a Lemmy poster to you too?

    He started using LLMs (ChatGPT specifically) only as a tool to advance his hobby and work. When he first started, it appears he understood it was just a tool and didn't think it was sentient. Only later, after hundreds of hours of exposure, did this idea arise in him.

    Was there some underlying psychological problem that the LLM exacerbated? Possibly. But at what level was his original underlying issue? Do we all have some low level condition that would make us equally susceptible? I know we’d like to think we don’t, but how do we know? This man certainly didn’t think he did, I’m sure.

    Next I think about what it would take for me to go down this bad path without realizing it. At what point would I be talking to a chatbot, not realize it, and let what that chatbot said change or influence my thoughts when I'd have zero knowledge of it being just a fancy program? I consider myself moderately smart with good critical thinking skills, but I'm sure this man did too.

    Then it occurred to me that I have to concede I have, at some point, already interacted with a bot (in years past on Reddit, or even today on Lemmy) and had no idea it was a bot. Was that interaction a throwaway conversation about pop culture with no impact on my worldview? Or was it a much deeper, more important political or philosophical conversation, where the bot introduced an idea or hallucinated evidence to support a point and I didn't catch it to challenge it? Am I already a few, or many, steps down the path of falling for a bot's illusions? I certainly don't think so, but neither did he.

    How many of us are already on the same path as this guy and just as ignorant about the danger as the man in the article?


  • Lots of folks here are making good recommendations. Don't forget some of the OG BBS door games like Legend of the Red Dragon. There are quite a few internet-accessible BBSes still running the classic game.

    I like that it has an exhaustion component to the gameplay that only lets you take a few actions a day (which you can get through in as little as 5 minutes if you want). This means you'll never find yourself in too deep, because you'll have to wait until tomorrow for more turns. It also gives you something to look forward to the next day: seeing what happened in your absence between daily sessions.


  • Geopolitically, you're cherry-picking from a time when the nations of the EU are not as powerful globally. When Germany was powerful, look at how it treated the Poles. When Belgium was powerful, look at how it treated the people of Central Africa (the Congo). Spain, at the height of its power, treated the Aztec and other nations of the Americas with zero respect.

    > also because to be in the EU it is a requirement to observe human rights. Disrespecting the rights of people, even if they aren't of your own nationality, is contrary to democratic values.

    That is part of the diplomatic veneer. Yes, it's an ideal, but it will be discarded when geopolitically necessary. How many boats of migrants have drowned off the coasts of Italy or Greece? Are diplomats and citizens of Israel still allowed free movement in the EU despite its treatment of those in Gaza?

    Keep in mind, I'm not criticizing the EU. I recognize the really ugly realities that come with geopolitics and the choices national leaders make to serve the interests of their citizens, even when that conflicts with their own ideals.

    > You may be thinking China and Russia are just as bad or maybe even worse, but that isn't the pattern you should be looking at. You should compare with other democracies, and especially countries that have a better democracy than the USA.

    Comparing "degrees of disrespect" ignores geopolitical realities. If you want to have a conversation about the ideals humanity should adopt, we will likely agree on most points of the discussion, but understand that national leaders will, when push comes to shove, ignore all of it and do what they think is best for their nation, no matter the cost to other nations.

    Also, none of this is a defense of the actions of China, Russia, or the USA. It's a recognition that powerful nations do these things when it serves their interests.



  • Hoping it's that green cap and not the 4050 chip…

    That green cap I think is a mylar capacitor and will cost you maybe 5 cents at retail (and .00001 cents in bulk).

    That 4050 is also dirt cheap: maybe 50 cents to $1 USD at retail. You'll pay more in shipping than for the part. Today's CMOS ICs are a bit more robust against static discharge than those made in the 1980s, but don't risk it when you do the replacement. Use a grounding wrist strap or the like when you desolder the old 4050 and install the new one, partly to protect the 4050 but mostly to protect the CPU, which would probably cost you closer to $11-$20 (just a guess) to replace if it dies.



  • I’ve never worked on Atari consoles but you got me curious.

    I did a Google search for schematics and, not surprisingly, found many variants. So I don't know if this one matches your board, but here's the schematic for one, with some of my colored markup:

    In working operation, the red arrow is apparently the "fire" button on the joystick. Pressing the fire button ties pin 6 to pin 8 (blue arrow). Pin 6 is normally pulled down (to ground) by the circuit I have circled in dark red. Pin 8 carries +5V generated by the part I have circled in magenta. So pressing the button sends +5V first through the area circled in dark blue, which I think is doing some debouncing (cleaning up noise to prevent an accidental rapid up/down/up/down in the microseconds after the fire button is pressed). If any of those capacitors or that diode is shorted, it would send +5V constantly, "holding down" the fire button.

    Assuming all of that is fine, the next area I'd look at is the dark red circled area. This is where the pull-down to ground comes from, making sure pin 6 is low and the fire button reads "off" or "not pressed." If any of this is floating, the pin could read as "not ground" and the main IC would think the button is pressed.

    Next would be those 4050 ICs circled in green. These are CMOS buffers, and CMOS ICs ARE EXTREMELY VULNERABLE TO STATIC DISCHARGE. Their job is just to take an input at some voltage and output a single clean digital signal, either 1 or 0. There is one buffer for each fire button (left and right joysticks).

    Finally, the fire button output of that 4050 buffer is delivered to the main IC, that A201 TIA (my schematic may be from a European PAL model).

    If you had this disassembled on a bench and had a voltmeter, you could get a good idea of where the problem is in about 10 minutes.







  • > If you play with the parameters you can make all kinds of things happen, but all of those things are still driven by the existing information it already has or can find. It can mash things together in random new ways, but it will always work with components that already exist.

    Or pure randomness, but the spirit of your point is sound. And if it is randomness, it may be unique output, but the utility of that result may be zero.

    > There is no awareness of context or meaning that would allow it to make intelligent choices about what it mashes together. That will always be driven by the patterns it already knows, positively or negatively.

    100% AGREE. LLMs are not "thinking". LLMs are NOT the HAL 9000 from the movie 2001: A Space Odyssey.

    > It's like doing chemistry by picking random bottles from the shelf and dumping them into a beaker to see what happens. You could make an amazing discovery that way, but the chances of it happening are very, very low. And even if it does happen, there's an excellent chance that you won't recognize it.

    100% AGREE.

    > I'm in favor of using LLMs for tasks that involve large-scale data analysis. They can be quite helpful, as long as the user understands their limitations and performs due diligence to validate the results.
    >
    > Unfortunately, what we are mostly seeing are cases where LLMs are used to generate boilerplate text or code that is assembled from a vast collection of material that someone who actually knew what they were doing had previously created. That kind of reuse is not inherently bad, but it should not be confused with what competent writers or coders do. And if LLMs really do take over a lot of routine daily tasks from people, the pool of approaches to those tasks will stagnate, and eventually degenerate, as LLMs become the primary sources of each others' solutions.

    100% agree. The degeneration is already occurring because bad LLM output is being fed back in as authoritative training data resulting in confidently wrong answers being presented as truth. Critical thinking seems to have become an endangered species in the last 20 years and I’m really worried that people are trusting LLM chatbots completely and never challenging the things they output but instead accepting them as fact (and acting on those wrong things!).

    LLMs may very well change the world, but not in the ways most people expect. Companies that have invested heavily in them are pushing them as the solutions to the wrong problems.

    I think we have some of the pieces today that could make AI in general more trustworthy. Grounding can go partway toward making today's LLMs more trustworthy. If an LLM claims something as fact, it should be able to produce a citation that supports it (outside of the LLM's own output). That source can then be evaluated critically. Today's grounding doesn't go far enough, though. An LLM today will say "I got that from HERE" and simply give you a document. It won't show the line of text and the supporting arguments that would arrive at its stated output. It can't do these things today because what I just described is reasoning, which is something an LLM is NOT capable of. So we wait for true AGI instead.


  • > LLMs are not capable of creating anything, including code. They are enormous word-matching search engines that try to find and piece together the closest existing examples of what is being requested. If what you're looking for is reasonably common, that may be useful.

    Just for common understanding, you’re making blanket statements about LLMs as though those statements apply to all LLMs. You’re not wrong if you’re generally speaking of the LLM models deployed for retail consumption like, as an example, ChatGPT. None of what I’m saying here is a defense about how these giant companies are using LLMs today. I’m just posting from a Data Science point of view on the technology itself.

    However, if you're talking about LLM technology from a Data Science view, your statements may not apply. The common hyperparameter settings for LLMs choose the most likely match for the next token (as in the ChatGPT example), but nothing about the technology requires that. In fact, you can set a model to specifically exclude the top result, or even choose the least likely result. What comes out when you set these hyperparameters is truly strange and looks like absolute garbage, but it is unique; the result is something that likely hasn't existed before. I'm not saying this is a useful exercise; it's the most extreme version, to illustrate the point. There's also the "temperature" hyperparameter, which introduces straight-up randomness. If you crank it up, the model will make selections with very wide weights, resulting in pretty wild (and potentially useless) results.
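    A toy sketch of what I mean (the vocabulary and logits below are invented purely for illustration; a real LLM scores tens of thousands of tokens at every step):

```python
# Toy sketch of LLM sampling hyperparameters. The vocabulary and logits
# are made up for illustration; a real model produces a score for every
# token in a ~50k+ vocabulary at each generation step.
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into probabilities. Low temperature sharpens the
    distribution (near-greedy); high temperature flattens it, giving
    unlikely tokens a real chance of being sampled."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["the", "cat", "sat", "quantum", "banana"]
logits = [4.0, 2.5, 2.0, 0.5, 0.1]

# Standard "most likely" choice vs. the extreme "least likely" setting.
greedy = vocab[max(range(len(vocab)), key=lambda i: logits[i])]        # "the"
least_likely = vocab[min(range(len(vocab)), key=lambda i: logits[i])]  # "banana"

sharp = softmax_with_temperature(logits, 0.2)  # near-deterministic
flat = softmax_with_temperature(logits, 5.0)   # close to uniform

print(greedy, least_likely)
print([round(p, 3) for p in sharp])
print([round(p, 3) for p in flat])
```

    At temperature 0.2 the top token soaks up nearly all the probability mass; at 5.0 the distribution flattens out, which is exactly why high-temperature sampling gets "wild."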

    What many Data Scientists do when trying to make an LLM generate something truly new and unique is balance these settings so that new, useful combinations come out without the output being absolute useless garbage.