I'm interested in playing with this, but I'm not interested in paying $3000 for a high-end nVidia GPU. It seems in principle possible to split the task into smaller subtasks that could be distributed across a Beowulf-style Pi cluster. I hear several enclosures exist that let you connect multiple Compute Modules over a high-speed bus.
I have minimal experience with Raspberry Pi (not zero, but minimal). I have none with setting up Beowulf clusters, and none with decomposing machine learning tasks and distributing them among processors. Thus, I wonder if there might be an existing project I could learn from and maybe even eventually contribute to, even if only as a tester.
Thanks.
There were some projects that used multiple Pis. From a cost and
complexity standpoint they were more "because we can" than practical.
A multicore AMD or Intel processor would be a better option: if you
skip the high-power GPU, the system cost is lower, and the MPI cluster
code is "off the shelf" for those processors.
https://mpitutorial.com/tutorials/mpi-hello-world/
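To give a feel for it, here is a minimal sketch along the lines of that
tutorial's hello-world. Build it with mpicc and launch it with mpirun,
adjusting the process count and host list to suit your cluster:

  /* Minimal MPI "hello world" sketch.
   * Compile: mpicc hello.c -o hello
   * Run:     mpirun -np 4 ./hello
   */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int world_size, world_rank;
      MPI_Comm_size(MPI_COMM_WORLD, &world_size);  /* total processes */
      MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);  /* this process's id */

      char processor_name[MPI_MAX_PROCESSOR_NAME];
      int name_len;
      MPI_Get_processor_name(processor_name, &name_len);

      printf("Hello from %s, rank %d of %d\n",
             processor_name, world_rank, world_size);

      MPI_Finalize();
      return 0;
  }

Each process prints its rank, so running it across the nodes of a
cluster is a quick way to confirm the MPI setup is working before you
try to distribute any real workload.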