KingRandomGuy

joined 1 year ago
[–] KingRandomGuy@lemmy.world 2 points 3 days ago (1 children)

Sony v. Connectix is the actual case that set the precedent for emulation, not Bleem. The Bleem case decided whether the use of screenshots of copyrighted games to advertise its emulator was legal; I believe it just deferred to the Connectix case for the legality of the emulator itself.

[–] KingRandomGuy@lemmy.world 2 points 4 days ago

I haven't built the OAT, but I have built the larger variant (OpenAstroMount). The OAM is pretty well designed in my opinion, with the biggest weak point being the connection to the tripod; in my case I machined a replacement for the printed part.

I believe the OAT should be more than enough for a light setup, but you have to make sure you have a good place to set it up, since it isn't designed to sit on a tripod. They have a new model in the works (OpenAstroExplorer) which I believe is designed for tripod use without being as massive as the OAM.

You can bring the cost down by purchasing components for cheap on AliExpress. That's what I did for my OAM. Take advantage of their sale events too. You'll want to avoid cheaping out too much on certain motion components (namely pulleys and belts, possibly also bearings). For aluminum extrusions, a local supplier might be cheaper, especially if you can buy a long section and cut it yourself.

You could also look at the OG Star Tracker for a cheaper build, though I believe that doesn't support GoTo and may not have as good performance.

[–] KingRandomGuy@lemmy.world 7 points 2 weeks ago

Yeah, we used to joke that if you wanted to sell a car with high-resolution LiDAR, the LiDAR sensor would cost as much as the car. I think others in this thread are conflating the price of other forms of LiDAR (usually sparse and low-resolution, like the units on 3D printers) with that of dense, high-resolution LiDAR. However, the cost has definitely still come down.

I agree that perception models aren't great at this task yet. IMO monodepth never produces reliable 3D point clouds, even though the depth maps and metrics look reasonable. MVS (multi-view stereo) does better but is still prone to errors. I do wonder if any companies are considering depth completion with sparse LiDAR instead; the papers I've seen on that topic usually produce much more convincing point clouds.
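For anyone unfamiliar with why the 3D errors are so glaring: back-projection amplifies depth error. A minimal numpy sketch of standard pinhole back-projection (fx, fy, cx, cy are placeholders for real calibrated intrinsics):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) metric depth map into an (N, 3) point
    cloud using a pinhole model; drops invalid (non-positive) depths."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```

Since x and y both scale with the predicted depth, a depth map that merely looks plausible can still unproject into a badly warped cloud, which is exactly the monodepth failure mode I mean.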

[–] KingRandomGuy@lemmy.world 3 points 2 weeks ago

I think it's been about a year? IIRC Intel only started using TSMC for their processors with Meteor Lake, which was released in late 2023.

I believe their discrete GPUs have been manufactured at TSMC for longer than that, though.

[–] KingRandomGuy@lemmy.world 2 points 3 weeks ago

I use a lot of AI/DL-based tools in my personal life and hobbies. As a photographer, DL-based denoising means I can get better photos, especially in low light, and DL-based deconvolution tools help sharpen my astrophotos as well. The deep-learning-based subject tracking on my camera also helps me get more in-focus shots of wildlife. As a birder, tools like Merlin Bird ID's audio recognition and image classification are helpful when I encounter a bird I don't yet know how to identify.

I don't typically use GenAI (LLMs, diffusion models) in my personal life, but Microsoft Copilot does help me write visualization scripts for my research. I can never remember the right methods for visualization libraries in Python, and Copilot/ChatGPT do a pretty good job at that.
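For a sense of what I mean, this is the kind of boilerplate it saves me from looking up (a made-up example on dummy data, not anything from my actual research):

```python
import numpy as np
import matplotlib.pyplot as plt

steps = np.arange(200)
loss = np.exp(-steps / 50) + 0.02 * np.random.rand(200)  # dummy data

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(steps, loss, label="train loss")
ax.set_xlabel("step")
ax.set_ylabel("loss")
ax.set_yscale("log")  # exactly the kind of method name I never remember
ax.legend()
fig.tight_layout()
fig.savefig("loss_curve.png", dpi=200)
```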

[–] KingRandomGuy@lemmy.world 1 point 3 weeks ago

> There is no "artificial intelligence" so there are no use cases. None of the examples in this thread show any actual intelligence.

There certainly is (narrow) artificial intelligence. The examples in this thread are almost all deep learning models, which fall under ML, which in turn falls under the field of AI. They're all artificial intelligence approaches, even if they aren't artificial general intelligence, which more closely aligns with what a layperson thinks of when they say AI.

The problem with your characterization (showing "actual intelligence") is that it's super subjective. Historically, being able to play Go, and to a lesser extent chess, at a professional level was considered to require intelligence. Now that algorithms can play these games, folks (even those in the field) no longer think they require intelligence and shift the goalposts. The same was said about many CV tasks, like classification and segmentation, until modern methods became very accurate.

[–] KingRandomGuy@lemmy.world 7 points 3 weeks ago

I work in CV, and a lot of labs I've worked with use consumer cards for their workstations. If you don't need the full 40+ GB of VRAM, you save a ton of money compared to datacenter or workstation cards: a 4090 is approximately $1600, versus $5000+ for an equivalently performing L40 (though the 4090 has half the VRAM, obviously). The x090-series cards may be overpriced for gaming, but they're actually excellent in terms of bang per buck for DL tasks compared to the alternatives.

AI has certainly produced revenue streams; don't forget that AI is not just generative AI. The computer vision in high-end digital cameras, for example, is all deep-learning-based and gets people to buy the latest cameras.

[–] KingRandomGuy@lemmy.world 2 points 3 weeks ago

Yeah there's a good chance you're right. Maybe something to do with memory management as well.

Long term I'll probably end up switching back to Darktable. I used it before and honestly it's quite good, but I currently have a free CC license from my university, and the AI denoise features in LR are pretty nice compared to Darktable's classical profiled denoise. It also helps that the drivers for my SD card reader are less finicky on Windows, so it's easier to quickly copy images over from my camera there than on Linux. Hopefully that gets better over time too!

[–] KingRandomGuy@lemmy.world 2 points 3 weeks ago

I don't know exactly, but it's apparently a thing. Some game anti-cheat software such as Easy Anti-Cheat will give you an error message saying something along the lines of "Virtual machines are not supported." Some are easy to bypass by just tweaking your VM config, others not so much.
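The most basic check is just the CPUID "hypervisor present" bit, which Linux exposes as a cpuinfo flag. Real anti-cheat goes much deeper (timing behavior, device/vendor IDs, etc.), but as an illustrative sketch of the idea:

```python
def in_vm():
    """Crude hypervisor check on Linux: KVM, Hyper-V, and friends set
    the CPUID 'hypervisor' bit, which shows up in the cpuinfo flags."""
    with open("/proc/cpuinfo") as f:
        return any(line.startswith("flags") and "hypervisor" in line
                   for line in f)

print("VM detected" if in_vm() else "probably bare metal")
```

Hiding that bit is the "tweaking your VM config" part (in libvirt it's a one-line CPU feature policy, `<feature policy='disable' name='hypervisor'/>`); the checks that are hard to bypass are the ones beyond CPUID.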

[–] KingRandomGuy@lemmy.world 3 points 3 weeks ago (2 children)

Fair enough! I think it's more common for games to do that, but I've sometimes had trouble with software on Windows that itself used virtualization features. I probably just didn't configure the Hyper-V settings properly, but I know nested virtualization can be tricky.

For me it's also because I'm on a laptop: my Windows VM relies on passing through an external GPU over TB3, and my laptop's dedicated GPU has no connection to a display, so GPU passthrough for the VM would be tricky when I'm on the go. I like being able to boot Windows on the go to edit photos in Lightroom, for example, but otherwise I'd rather run the Linux host and use the Windows VM only as needed.

[–] KingRandomGuy@lemmy.world 3 points 3 weeks ago (7 children)

I'm a fan of dual booting AND using a passthrough VM. It's easiest to set up if your machine has two NVMe slots and you put each OS on its own drive. This way you can pass the Windows NVMe through to the VM directly.

The advantage of this configuration is that you get the convenience of not needing to reboot to run some Windows-specific software, but when something doesn't play nice with virtualization (maybe a program takes too large a performance hit, or refuses to run on virtualized systems, like some anti-cheat-enabled games), you can always reboot into that same Windows installation directly.
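If you go this route, the NVMe passthrough itself is just a PCI hostdev entry in the libvirt domain XML, something like this sketch (the address is an example; `lspci -nn | grep -i nvme` shows yours):

```xml
<!-- Hand the Windows drive's NVMe controller to the guest as a whole
     PCIe device. Substitute the address lspci reports on your machine. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

Passing the whole controller rather than a disk image is what lets the same Windows install boot both bare-metal and inside the VM.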

[–] KingRandomGuy@lemmy.world 1 point 4 weeks ago

GPU and overall firmware support is always better on x86 systems, so it makes sense that you switched to that for your application. Performance is also usually better if you don't explicitly need low power.

In my use case I run an astrophotography rig off the Orange Pi 5 Plus, so I needed something low power that could run Linux easily, had USB 3 and reasonable single-core performance, and preferably offered an upgradable E-key WiFi card and a full-speed M-key NVMe slot for storage (preferably PCIe 3.0 x4 or better). Hardware serial ports were a plus too. x86 boxes would've been preferable, but a lot of the cheaper options are older Intel mini PCs with pretty poor power efficiency, and the newer power-efficient stuff (N100-based) is more expensive; the cheaper N100 units I found unfortunately tended to have soldered-on WiFi. Accordingly, the Orange Pi 5 Plus ended up being my cheapest option that ticked all the boxes. If only the software support were as good as x86!

Interesting to hear about the NPU. I work in CV and I've wondered how usable the NPU was. How did you integrate deep learning models with it? I presume there's some conversion from runtime frameworks like ONNX to the NPU's toolkit, but I'd love to learn more.
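For reference, the path I've seen mentioned for RK3588 boards like this one is Rockchip's rknn-toolkit2: convert the ONNX model offline, then deploy the resulting .rknn file on-device. A rough, untested sketch (filenames made up):

```python
from rknn.api import RKNN  # Rockchip's rknn-toolkit2

rknn = RKNN()
# Bake preprocessing into the converted model and target the RK3588 NPU
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3588')
rknn.load_onnx(model='model.onnx')      # trained network exported to ONNX
rknn.build(do_quantization=True,
           dataset='calib_images.txt')  # list of INT8 calibration images
rknn.export_rknn('model.rknn')          # artifact the NPU runtime loads
```

No idea whether that matches what you did, though, or if you drove the NPU some other way.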

I'm also aware that Collabora has gotten the NPU drivers upstreamed, but I don't know how NPUs are traditionally interfaced with on Linux.

 

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=163) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 420x30s lights, 40 darks, 100 flats, 100 biases, 100 dark-flats over two nights
  • Prepared data and stacked in SiriLic
  • Background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++ in SiriLic
  • Adjusted curves, enhanced the saturation of the nebula, and recombined with the star mask in GIMP; desaturated and denoised the background

This is my first time doing a multi-night image, and my first time using SiriLic to configure a Siril script. Any tips there would be helpful. Suggestions for improvement or any other form of constructive criticism are welcome!
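For anyone curious, the per-session scripts SiriLic assembles follow the same shape as Siril's stock OSC preprocessing scripts; a rough sketch of that skeleton (not my exact script) looks like:

```
requires 1.2.0

# master bias
cd biases
convert bias -out=../process
cd ../process
stack bias rej 3 3 -nonorm

# flats, calibrated with the master bias
cd ../flats
convert flat -out=../process
cd ../process
calibrate flat -bias=bias_stacked
stack pp_flat rej 3 3 -norm=mul

# master dark
cd ../darks
convert dark -out=../process
cd ../process
stack dark rej 3 3 -nonorm

# calibrate, debayer, register, and stack the lights
cd ../lights
convert light -out=../process
cd ../process
calibrate light -dark=dark_stacked -flat=pp_flat_stacked -cfa -equalize_cfa -debayer
register pp_light
stack r_pp_light rej 3 3 -norm=addscale -output_norm -out=../result
```

As I understand it, with dark-flats you'd calibrate the flats against a stacked dark-flat instead of the master bias, and for multi-night data SiriLic chains one of these per session before registering and stacking everything together.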

36 points, submitted 1 year ago* (last edited 1 year ago) by KingRandomGuy@lemmy.world to c/astrophotography@lemmy.world
 

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=153) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 360x30s lights, 30 darks, 30 flats, 30 biases
  • Stacked in Siril, background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++
  • Enhanced saturation of the galaxy and recombined with star mask in GIMP, desaturated and denoised background

Suggestions for improvement or any other form of constructive criticism welcome!
