![](https://lemmy.world/pictrs/image/8286e071-7449-4413-a084-1eb5242e2cf4.png)
APU board? They are going EOL soon, but these devices are built like a tank. Full Linux x86_64 support, coreboot BIOS. https://www.pcengines.ch/apu.htm A few sellers in the EU still have them.
stay a while and dwell in the fediverse or are you afraid you might enjoy it?
In case you get stuck again and need more games:
Notable mentions: World of Goo, Human Resource Machine
Reminds me of the beginning from the novel “The Swarm” by Frank Schätzing…
SSDs are not really good for long-lasting backups. They hold data by electric charge, so if you unplug your SSD and store it, it might lose its data after just a couple of years. HDD “spinning rust” still has its merits when it comes to long-term data storage; it holds its magnetic data longer without fresh power.
While being an environmental issue, the plastic wrappings have a practical purpose: protecting food from roaches. In many Japanese cities you cannot leave food open without attracting gokiburi within a few hours. This is also why the Japanese keep everything as clean as possible. Even in the shadiest places there is someone with a vacuum and a sticky-tape floor roller(!) to prevent the smallest crumb from staying on the floor too long. Eating on the move in the streets is frowned upon, because dropped crumbs attract roaches. Public trash cans are rare because - you guessed it - roaches. You are expected to carry any trash back home and put it in a sealed bag in your trash bin. The typical size of Japanese houses and flats does not offer much space for storing large food containers, so you buy your food in small portions.
Of course a more environmentally friendly wrapping would be better, but it has to be able to withstand a roach nibbling on it, which is not the case for various organic-based polymers.
I first tried it a few days ago and I’m still a bit lost. Inpainting, which is the major part of my workflow, doesn’t feel as swift as in automatic1111, and I’m still searching for an equivalent of the only-masked-area inpainting in ComfyUI.
But I can confirm it is much faster and uses less VRAM. And I somehow love the ability to save the entire workflow into a json. I’m missing my prompt-autocomplete plugin the most.
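For anyone curious what that saved workflow looks like: ComfyUI’s API-format export is just a dict of numbered nodes wired together by `[node-id, output-index]` references. This is a heavily simplified, illustrative sketch; the exact fields vary by ComfyUI version and all values here are placeholders:

```json
{
  "3": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 42,
      "steps": 20,
      "cfg": 7.0,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1.0,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    }
  },
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": { "ckpt_name": "model.safetensors" }
  }
}
```

Because the whole graph is plain JSON, you can diff, version-control, or programmatically patch workflows, which automatic1111’s UI state never really allowed.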
Do not expect to be able to offer this service at a price competitive with cloud providers. Caring for a company IT system is a big challenge and requires more work the more users there are.
For a company this size: make a clear contract.

- Estimate how much time you need for setup/installation, plus monthly hours for maintenance, monitoring and at least daily(!) backups.
- Let them choose whether they want a failover, and charge for the required hours and material.
- Put in the contract when they can expect support from you, including a clause for a holiday substitute admin (if needed).
- Put a price tag on support hours for holding people’s hands when they “can’t find that file they uploaded a week ago and it is surely a server issue”.
- Put a price tag on engineering hours for any modifications they might want, like installing any plugins they deem useful for themselves.
- Include hardware prices, traffic, rack space and power as well.
- Have a good plan for updates, choose your distro wisely, and do not rely on autoupdates.
Play all this through in your head, add up the hours, choose a fair rate and then you have your pricetag.
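As a back-of-the-envelope sketch of that calculation (every number and the overhead factor below are made-up placeholders, not a pricing recommendation):

```python
# Hypothetical monthly price for self-hosting a company service.
# All figures are placeholder assumptions; plug in your own numbers.

def monthly_price(setup_hours, maintenance_hours, support_hours,
                  hourly_rate, hardware_cost, amortize_months=36,
                  overhead=1.15):
    """Amortize one-time setup and hardware over the contract length,
    add recurring hours, and apply a small overhead for rack space,
    power and traffic."""
    one_time = (setup_hours * hourly_rate + hardware_cost) / amortize_months
    recurring = (maintenance_hours + support_hours) * hourly_rate
    return round((one_time + recurring) * overhead, 2)

# Example: 40h setup, 8h/month maintenance, 4h/month support,
# 90/h rate, 3000 in hardware, amortized over 3 years.
price = monthly_price(setup_hours=40, maintenance_hours=8,
                      support_hours=4, hourly_rate=90,
                      hardware_cost=3000, amortize_months=36)
print(price)
```

Playing with the parameters makes it obvious why the per-user cost drops slowly: the recurring hours dominate, and those scale with headcount.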
Cloud will always be cheaper, because they have their infrastructure already deployed. Building from the ground up is more expensive, but I think it is worth it. Will they?
Yes, I tested it, and although it works in its current state, it takes 2-3 hours per picture on a Pi and 20 minutes per picture on my desktop CPU.
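For scale, the gap between the two machines in those timings works out to roughly:

```python
# Rough speedup of the desktop CPU over the Pi, using the timings above.
pi_seconds = 2.5 * 3600        # 2-3 hours per picture; take the midpoint
desktop_seconds = 20 * 60      # 20 minutes per picture
speedup = pi_seconds / desktop_seconds
print(f"desktop is ~{speedup:.1f}x faster per picture")
```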
But… isn’t unsupervised backfeeding the same as simply overtraining on the same dataset? We already know overtraining causes broken models.
Besides, the next AI models will be fed with the interactions between humans and AI, not just their own content. ChatGPT already works like this; it learns with every interaction, every chat.
And the generative image models will be fed with AI-assisted images where humans will have fixed flaws like anatomy (the famous hands) or other glitches.
So, as interesting as this is, as long as humans interact with AI, the hybrid output used for training will contain enough new “input” to keep the models on track. There are already refined image generators trained on their own (but human-assisted) output that are better than their predecessors.
Google Talk once federated with XMPP/Jabber; good times, until their userbase was big enough to defederate again, crippling the Jabber network. It will happen again if we let it.
Meta’s plan is to draw users into their network and use the fediverse as an initial catalyst (“look! so much content already there!”). Once their userbase is large enough, they will defederate again, claiming protocol difficulties or something equally vague, when they really just want to start rolling out advertising that would not be displayed to users from other instances. Most users will not keep two accounts; they will just stay with the big corp, leaving the original fediverse again.
Yes, that should work. Check out stable-diffusion-webui (automatic1111) and text-generation-webui (oobabooga). And grab the models from Civitai (Stable Diffusion) and Hugging Face (LLMs like LLaMA, Vicuna, GPT-J, Wizard, etc.).
Check out Stable Diffusion and the LLaMA model family. You can run those offline on your local hardware and won’t have to worry about sharing private details with some cloud service that openly says it will look at your discussions and data and use them for training.
The short tinnitus that lasts just a few minutes is relatively common. The most common causes are stress and circulation issues. There seems to be no separate name for the short kind to differentiate it from the permanent ringing.
I found that if my ear starts ringing due to stress or just spacing out while overthinking stuff, briefly hyperventilating (to increase blood oxygen levels) and massaging my ear canal from the outside (to increase circulation) help get rid of it more quickly. Maybe this helps somebody someday.
For composition I use the semantic segmentation ControlNet: sketch loosely, then inpaint or apply more ControlNet passes. Of course I use GIMP or any other tool to fine-tune the image or to “force the model’s hand” a little during inpainting.
I still like reading manga on the Kindle Paperwhite; the e-ink display is much easier on the eyes, and the weight and battery life are far better than any full-blown tablet. Calibre can easily transfer/encode the comics to it, so no proprietary software needed.
i never knew! fantastic!
You mean the step after the ludicrous amount of inpainting one has to do sometimes?
Apart from the mentioned add-detail LoRA (works in the negative prompt as well), maybe rerun the image through Ultimate SD Upscale with the ControlNet extension? (Go easy on the denoise level here, or your image becomes a surrealist’s dream.)
Pandora’s box has been opened. AI will not go away, and any attempt to impose regulation on it will only harm the public and open source development, while big corporations will just train their models off-shore in secrecy.
Society has to adapt to this new technology that is altering our everyday lives. We did this before, we will do it again. The only thing we must watch out for is AI becoming available only to big corporations; no company (and preferably no government) must be allowed to have sole reign over such a powerful technology. If everybody has access, then everybody will know what to watch out for when they see it.
Do not fear technology, fear those who do not want to share it.
still no word on how to convert/train other finetuned models into their format :(