# Notes and Links

----
* [Contents](hat-00.md)
* Build Babylon and HAT
    * [Quick Install](hat-01-quick-install.md)
    * [Building Babylon with jtreg](hat-01-02-building-babylon.md)
    * [Building HAT with jtreg](hat-01-03-building-hat.md)
        * [Enabling the NVIDIA CUDA Backend](hat-01-05-building-hat-for-cuda.md)
* [Testing Framework](hat-02-testing-framework.md)
* [Running Examples](hat-03-examples.md)
* [HAT Programming Model](hat-03-programming-model.md)
* Interface Mapping
    * [Interface Mapping Overview](hat-04-01-interface-mapping.md)
    * [Cascade Interface Mapping](hat-04-02-cascade-interface-mapping.md)
* Development
    * [Project Layout](hat-01-01-project-layout.md)
* Implementation Details
    * [Walkthrough Of Accelerator.compute()](hat-accelerator-compute.md)
    * [How we minimize buffer transfers](hat-minimizing-buffer-transfers.md)
* [Running HAT with Docker on NVIDIA GPUs](hat-07-docker-build-nvidia.md)
---

# Notes and Links

### Deep Learning
* [Amazon's Deep Java Library (DJL)](http://djl.ai/)

### Manchester University TornadoVM
* [TornadoVM](https://github.com/beehive-lab/TornadoVM)
* [SPIR-V Toolkit](https://github.com/beehive-lab/beehive-spirv-toolkit)

### Other Java GPU Projects
* [Aparapi](https://github.com/Syncleus/aparapi)

### GitHub Repos
* [Babylon OpenJDK](https://github.com/openjdk/babylon)
* [jtreg](https://github.com/openjdk/jtreg)
* [jextract](https://github.com/openjdk/jextract)

### General GPGPU
* [A nice GPGPU state-of-play video](https://www.youtube.com/watch?v=48AdJgTYSFQ)
* [SCALE: running CUDA apps on AMD GPUs (Phoronix)](https://www.phoronix.com/news/SCALE-CUDA-Apps-For-AMD-GPUs)
* [ZLUDA: CUDA on Radeon GPUs (Phoronix review)](https://www.phoronix.com/review/radeon-cuda-zluda)
* [SCALE release announcement](https://scale-lang.com/posts/2024-07-12-release-announcement)
* [SCALE documentation](https://docs.scale-lang.com/)

### Blogs and Articles
* [Inside NVIDIA GPUs: Anatomy of high performance matmul kernels](https://www.aleksagordic.com/blog/matmul)
* [How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog](https://siboehm.com/articles/22/CUDA-MMM)