pavunkissa

joined 1 year ago
[–] pavunkissa@sopuli.xyz 11 points 5 months ago (2 children)

The problem, I believe, is that Stable Diffusion presently only supports Python 3.10, while Arch ships 3.12, and some of the dependencies aren't compatible with the newer version. Here's what I did to get it working on Arch with an AMD 7800 XT GPU.

  1. Install the python310 package from the AUR
  2. Manually create the virtualenv for Stable Diffusion with python3.10 -m venv venv (in the Stable Diffusion root directory)

This should be enough for the dependencies to install correctly. To get GPU acceleration working, I also had to set this environment variable: HSA_OVERRIDE_GFX_VERSION=11.0.0 (not sure if this is needed, or whether the value is the same, for a 7900 XTX).
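
For reference, the whole sequence looked roughly like this on my machine (a sketch; I'm assuming the yay AUR helper and the stock webui.sh launcher from the stable-diffusion-webui repo, so adjust for your setup):

yay -S python310                              # Python 3.10 from the AUR
cd stable-diffusion-webui                     # or wherever your install lives
python3.10 -m venv venv                       # recreate the venv with 3.10
HSA_OVERRIDE_GFX_VERSION=11.0.0 ./webui.sh    # GFX override for the 7800 XT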

[–] pavunkissa@sopuli.xyz 2 points 6 months ago (1 children)

That makes it clearer, thank you. But is this new technology? I always assumed it was the norm. It's possible I'm misremembering, but when I visited Japan over 20 years ago, every house had an AC that could both heat and cool (a necessity since the houses were basically uninsulated and could get quite chilly in the winter.)

[–] pavunkissa@sopuli.xyz 3 points 6 months ago (3 children)

I might be a bit confused, but aren't all air conditioners heat pumps? What other mechanism is there?

[–] pavunkissa@sopuli.xyz 3 points 10 months ago

This was my experience as well, as a developer trying to package an application as an AppImage. Creating an AppImage that works on your machine is easy. Creating one that actually works on other distros can be damn near impossible unless everything is statically linked and self-contained in the first place. In contrast, Flatpak's developer experience is much smoother, and if it runs, you can be pretty sure it runs elsewhere as well.

[–] pavunkissa@sopuli.xyz 4 points 11 months ago

If I recall, Enlightenment used to have a rather vocal fan base at one time. The DE was a lot prettier than most of its contemporaries, and was relatively lightweight despite having animated effects and everything. I always thought EFL was one of the hidden gems of the Linux ecosystem, left in GTK's and Qt's shadow, but after reading the article (back when it was first published) I realized there was probably a good reason it never got popular. I thought the story was embellished, as thedailywtf articles typically are, with the "SPANK! SPANK! SPANK! Naughty programmer!" stuff, so I downloaded the EFL source code and checked. OMG, it was a real error message. (Though I believe it has since been removed.)

The company in question using EFL was (probably) Samsung, who apparently still uses it as the native graphical toolkit for Tizen.

[–] pavunkissa@sopuli.xyz 3 points 1 year ago

That is a good point to emphasize. A downside of a CLA is that it adds a bit of bureaucracy and may deter some contributors. If the primary concern is whether a GPL-licensed app can be published on an app store, an alternative is to add an app store exception clause to the license. (The GPL allows optional additional permissions that make the license more permissive.) The trade-off is that while your code can still be incorporated into other GPL-licensed applications, you can't take code from other GPL projects that don't carry the same exception.

[–] pavunkissa@sopuli.xyz 18 points 1 year ago (4 children)

As others have already said, prohibiting use of the code in commercial applications would make the license neither open source nor free software (as defined by the Open Source Initiative and the Free Software Foundation).

These are some of the most commonly used licenses:

  • MIT - a very permissive license. Roughly says "do anything with this as long as you give attribution"
  • BSD - similar to MIT (note that there are multiple versions of the BSD license)
  • ASL2 (Apache License 2.0) - another permissive license. The major difference is that it also includes an explicit patent grant. (Mini rant: I often hear that GPL3's patent clause is the reason big companies don't like it. Yet ASL2 has a very similar clause and it's Google's favored license.)
  • GPL - the most popular copyleft license (family). Requires derived works to be licensed under the same terms.
  • LGPL - a variant of the GPL that permits dynamic linking to differently licensed works. Mainly useful for libraries.
  • AGPL - a variant of GPL that specifies that making the software available over a network counts as distribution. (Works around the SaaS loophole. Mainly used for server applications.)
  • Mozilla (MPL) - a hybrid permissive/copyleft license. As I understand it, the copyleft applies per file: changes to MPL-licensed files must be shared, but they can be combined with code under other licenses.

If you want to use a true FLOSS license and your goal is to discourage people from selling it, I'd say the GPL is your best bet. Legit vendors who don't want to give out their source code won't touch GPL code. The non-legit ones won't care no matter what license you choose. Also, iOS App Store terms are not compatible with the GPL so they can't release their stuff there, but you can as long as you hold full copyright to your application.

[–] pavunkissa@sopuli.xyz 1 points 1 year ago (1 children)

My impression about Matter was too that it is not “done” yet and device support is poor. On the other hand you read at every corner that it will be the future.

This is my impression as well. I'm keeping an eye on how this space develops and I'll probably buy a second dongle just for Thread when I need it (i.e. when some product I really want comes out that only supports Thread). I believe most zigbee dongles are theoretically capable of supporting Thread, since both protocols share the same physical layer (IEEE 802.15.4).

I'm curious to hear people's experiences with Thread/Matter devices. Ideally, I'd like to use my HA box as the border router and configure it to not allow any external Internet connections. Will this break any functionality on devices with a Matter logo on them? Ideally it shouldn't, but given the track record of manufacturers so far, my expectations are low.

[–] pavunkissa@sopuli.xyz 7 points 1 year ago (3 children)

I use zigbee2mqtt myself and I've been very happy with it. I haven't tried ZHA, but I believe z2m supports more devices. (I use z2m's supported devices list to choose which ones to buy.) The downside is that it's a bit more work to set up initially, as you need an MQTT broker as well. But in return, I feel like z2m is more reliable since it runs (and is updated) separately from HA core. I use it with a zzh! dongle, and even though I got one of the bad ones with a faulty amplifier chip, it's been rock solid.
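
One small tip if you're setting z2m up from scratch: point it at the dongle's stable by-id path instead of a bare ttyUSB name, so the config survives the device numbering changing. A quick way to find it (assuming the dongle shows up as a USB serial device):

ls -l /dev/serial/by-id/    # pick the entry for your coordinator; it symlinks to the actual ttyUSBx/ttyACMx device
# then use that full /dev/serial/by-id/... path as the serial port in z2m's settings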

As for Thread(+Matter), I'm waiting for things to settle down. Support in HA is still experimental and there are very few products out yet that use Thread. I'll probably prefer Zigbee for as long as they sell them so all my devices will share the same mesh. Also, unlike Zigbee, Thread devices are not guaranteed to be local-only, which is my biggest worry. Thread/Matter won't free us from having to check a device compatibility list before buying.

[–] pavunkissa@sopuli.xyz 4 points 1 year ago

This is my chief worry with Thread. Zigbee is guaranteed to be local-only, but if they switch over to Thread, the individual bulbs will be able to call home, even if they expose some of their functionality locally via Matter. With Home Assistant, one can probably configure the Thread border router to not allow internet access, but I suspect a lot of supposedly local Thread/Matter devices will be designed with the assumption that they have cloud access and won't function fully if firewalled.

[–] pavunkissa@sopuli.xyz 2 points 1 year ago

I didn't actually have problems with proxmox, other than the potential compatibility issue with Frigate. I didn't test it, but I had read that getting iGPU passthrough for video acceleration working can be tricky. A couple of things worked better: the ethernet adapter was more stable and the power button worked.

 

A few weeks ago I wrote about my experience migrating a HA installation from a Raspberry Pi to a NUC running proxmox. Since I can't help but tinker, here's my experience transferring the installation to bare metal.

Reasons I had for making the switch:

  • There are HAOS add-ons for pretty much every extra service I'm interested in running right now
  • According to its documentation, Frigate works better when not run inside a VM (passing through the iGPU can be problematic)
  • Proxmox nags me about a subscription every time I log in to the admin console
  • Random crashes (that I misattributed to proxmox)
  • Potentially lower idle power consumption (made no difference, it turned out)

I started the migration by making a full backup again. I installed Puppy Linux on a USB stick, booted the NUC with it, downloaded the HA image and wrote it straight over the boot drive:

dd if=hass.img of=/dev/nvme0n1 status=progress

(Fun sidenote: writing an image from RAM to an NVMe drive was so ridiculously fast, the USB stick felt like a floppy disk in comparison.) After rebooting, I had a fresh HA install once again. This time, I monitored the restoration progress by periodically checking the supervisor logs on the console (with the ha supervisor logs command). Running ha supervisor stats showed CPU usage at around 50% (one core at 100%). The restore took roughly 16 minutes to complete.
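
For the record, these are the exact commands (they also work without the ha prefix straight at the ha > prompt on the HAOS console; type login first if you want a root shell instead):

ha supervisor logs     # shows what the supervisor is doing during the restore
ha supervisor stats    # CPU and memory usage while the restore runs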

I had some connection trouble after restoring, but at a quick glance everything seemed to work after a reboot. On closer inspection, I noticed most of my add-ons were missing. I ran a partial restore of everything except HA core, which appeared to fail because it couldn't fetch the add-on images.

Before, I had proxmox seemingly crash on me a couple of times. Actually, it was losing its network connection and needed a reboot to recover. I had thought this might be a problem with proxmox, but it turned out to be even worse when running bare-metal HAOS! Every time the network cut out, I saw this message on the console:

rtl_rxtx_empty_cond == 0 (loop 42 delay 100)

There appears to be a bug in the Realtek network chip or its driver. Intel doesn't list which Ethernet chip this NUC model uses, probably because they're ashamed of it. Under proxmox the bug was triggered maybe once a week, but in HAOS it was more like once every few minutes. Going back to proxmox wouldn't have been an acceptable fix because I still couldn't trust the server to remain online.
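
If you want to check what you're dealing with on your own box (a quick sketch; I'm assuming the Realtek NIC is driven by the in-kernel r8169 module, which is where that message seems to come from):

lspci -nnk | grep -iA3 ethernet    # shows the exact Realtek chip and which driver is bound to it
dmesg | grep -i r8169              # the rtl_rxtx_empty_cond complaints show up here when the link dies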

I worked around the problem by running to a local computer shop and getting a USB Ethernet adapter. Hopefully a future kernel update will fix the issue so I no longer need it, but for now the USB adapter (with an AX88179 chip) has been working perfectly. After fixing the network problem, the partial restore worked and all add-ons were reinstalled.

Finally, I wanted to add a second interface for my IoT VLAN. This was easy in proxmox, as I could simply add a second virtual adapter, but it can be done in plain HAOS just as easily. This feature doesn't seem to be mentioned in the documentation anywhere, but the ha command line tool can configure VLANs for you:

ha network vlan enp0s20f0u1 200 --ipv4-method auto --ipv6-method auto

This adds a new virtual interface to the physical interface enp0s20f0u1 for VLAN tag 200. (This can also be done using NetworkManager.)
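On a generic Linux box with NetworkManager, the equivalent would be something like this (a sketch, not something I've run on HAOS itself; the connection name and interface are just examples):

nmcli con add type vlan con-name iot-vlan \
    ifname enp0s20f0u1.200 dev enp0s20f0u1 id 200 \
    ipv4.method auto ipv6.method auto    # creates a VLAN 200 sub-interface on the parent NIC
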

Having HA on two subnets simultaneously has worked well so far. Traffic to my IoT devices no longer needs to go through the router and, in the future, setting up Matter devices on the IoT subnet ought to be possible since (to my understanding) they use link-local IPv6 addresses.

Lastly, I got a PoE camera and added Frigate. Configuring it was a bit of a chore and the documentation feels a bit fragmented, but I did get it working in a couple of hours. Some relevant notes:

  • OpenVINO detector seems to work well enough on the NUC. I currently have just one camera and feel no need to get a Coral accelerator
  • VAAPI acceleration for ffmpeg requires protected mode to be disabled (the "full access" version of the add-on is needed; a quick sanity check is sketched after this list)
  • I used go2rtc to restream the detect stream, since that stream is also good for live view. It can be viewed from Home Assistant's UI, even through Nabu Casa.
  • Frigate-card supports casting locally! (I figured out how this works: the media_player.play_media action, with content id media-source://camera/CAMERA_ENTITY_ID and content type application/vnd.apple.mpegurl)
  • Having Frigate continuously running doesn't have any measurable effect on the NUC's power consumption. Maybe there's something wrong with my power settings and it wasn't idling as much as it should have?
  • Using Frigate's person detection as an occupancy sensor works really well. This might actually replace a PIR sensor once I move the camera to its final location.
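
The VAAPI sanity check I mentioned above boils down to this (a sketch; if I remember right, vainfo ships inside the Frigate container, so run it from there rather than on the HAOS host):

ls -l /dev/dri/    # the render node (renderD128) has to be visible for VAAPI to work at all
vainfo             # lists the supported VAAPI profiles; run inside the Frigate container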

In the end, was it worth moving from proxmox to bare metal? Maybe. One less moving part to worry about, at least. It did not solve the random (network) crash issue, but I did figure out the root cause. There was no change in power consumption; the NUC still draws 10 watts (with or without Frigate running). If I come up with something that needs to run in a VM I might go back, but I'm also planning on building a NAS in the near future, which I could also use for running VMs and containers.

One problem still remains: the power button does nothing! I think HAOS is missing acpid or its configuration. This is not a showstopper, but it would be nice to be able to reboot the system gracefully when/if it loses networking.

 

I just finished migrating my Home Assistant installation from a Raspberry Pi to an Intel NUC, and I thought I'd share my experience. All in all, it went well, but there were a couple of pain points I'd like to have known about in advance.

Here's what I did. First, I prepared the NUC for installation. Rather than going bare metal, I installed proxmox because I plan on running other stuff on it as well. The proxmox installation was very straightforward, but figuring out how to install HA on it wasn't, as I'd never used proxmox before.

I first tried to use the HA image as a virtual installation medium, which did not work. I realized that, like with the RPi, it's not an installer but a ready-to-use disk image. I then found a nice guide on how to install HA on proxmox, with a handy helper script to set everything up for you.

Now I had a new HA instance running, ready for initial setup. Time for the switchover:

  1. I made a new full backup on the Raspberry Pi, then shut it down.
  2. I reassigned the Home Assistant IP address to the VM in my router's DHCP settings.
  3. I logged in to the new HA instance and uploaded the backup file using the restore from backup option on the setup screen.

This is where HA still has a pain point. There is no progress bar or anything to let you know the state of the restoration process. It took quite a while until the web UI came back up (and I'm not sure which log file I should have been monitoring in the console). Once it finally did, the add-ons were stuck in a weird state where some of them appeared to be running but were still shown as stopped. HA core was already operational, including all WiFi-based integrations. Zigbee2mqtt wasn't up yet because I hadn't yet passed through the zigbee stick.

After I had grown tired of waiting, I rebooted the VM and now the add-ons started up properly. All the settings were migrated, including Mosquitto's state. Very nice!

The last things to fix were:

  • Pass through the zigbee USB stick. I did this from the proxmox VM's hardware tab: Add USB device, select by USB vendor/device ID, and pick the one that says serial port. Zigbee2mqtt started working after doing this. (The same thing can be done with qm on the command line, see the sketch after this list.)
  • Pass through bluetooth. The NUC's built-in bluetooth adapter was also visible as a USB device. There were only two devices in proxmox's USB device dropdown: the zigbee stick and an unnamed device. The unnamed one was the bluetooth adapter. In HA's devices page, I removed the old RPi bluetooth device, added the new one, and immediately started receiving updates from my Ruuvi Tags.
  • Delete the RPi power supply check device.
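
For reference, the same USB passthrough can be done from the proxmox host shell instead of the web UI (a sketch; 100 is the VM ID and the vendor:device pairs come from lsusb, all placeholders here):

lsusb                              # find the vendor:device IDs of the zigbee stick and bluetooth adapter
qm set 100 -usb0 host=10c4:ea60    # zigbee stick (example ID, yours will differ)
qm set 100 -usb1 host=8087:0026    # built-in bluetooth adapter (again, check lsusb)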

All in all, a fairly smooth migration, with the only bump in the road being the lack of progress reporting when restoring from backup. Would recommend. The NUC (a NUC11 with a Celeron N4505 processor) plus memory and NVMe drive was only about twice as expensive as an RPi4 with an SD card, but it's a lot more powerful with a similar idle power consumption of around 6 W.
