Qointum Forum

Quantum-Secure Trustless Smart Money powering Web 3.0


#1 Nov. 19, 2018 09:29:40

Registered: 2014-09-10
Posts: 3

Initial Release - Raycoin v1.0

Raycoin-1.0-setup.exe (dev key / sig / VirusTotal Clean) (source)

GUI Wallet: electrum-ray-3.2.3-setup.exe (sig / VirusTotal Clean) (source / server)

Raycoin is a vanilla Bitcoin derivative with PoRT (Proof of Ray Tracing) for ASIC resistance.




Requirements:
- Latest Windows 10 October update
- Nvidia RTX 2080/2070/2060 GPU (Turing)

Max Supply: 21,000,000 RAY
Block Time: 10 minutes
Block Reward: 50 RAY
Founder's Reward: 9.9% for the first 4 years (4.95% of the max supply)
(goes towards the further development of Raycoin)
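The 4.95% figure can be sanity-checked with a few lines, assuming Raycoin inherits Bitcoin's emission schedule of 210,000 blocks per 4-year reward epoch (my assumption — the post doesn't state the halving schedule):

```python
# Back-of-the-envelope check of the founder's reward figures,
# assuming Bitcoin-style emission: 210,000 blocks per 4-year epoch.
BLOCKS_PER_EPOCH = 210_000
BLOCK_REWARD = 50          # RAY
MAX_SUPPLY = 21_000_000    # RAY

first_epoch_coins = BLOCKS_PER_EPOCH * BLOCK_REWARD  # 10,500,000 RAY (half the max supply)
founders_cut = first_epoch_coins * 0.099             # 1,039,500 RAY
print(founders_cut / MAX_SUPPLY)                     # ~0.0495, i.e. 4.95% of max supply
```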

Getting Started:
- to get started mining, run the Raycoin - mine shortcut on your desktop / Start menu
- by default mining runs slightly below full speed so that your desktop stays usable; to run at full speed, or to run more slowly in the background, use the other mining shortcuts in your Start menu
- by default mining uses addresses generated by Raycoin's command-line wallet, which is not easy to use; to mine to an address from your Electrum wallet, edit MINING_ADDRESS with the settings shortcut, and export any existing rewards to your Electrum wallet with the export shortcut
- to mine with multiple GPUs, edit GPU_COUNT with the settings shortcut

Raycoin Viewer:
- Raycoin Viewer is a ray trace hash visualizer, mining sandbox, and mining log viewer for your successful hashes and daily best hashes
- to visualize the genesis hash, copy and rename genesis-raytraces.log to <user>/AppData/Roaming/Raycoin/raytraces.log and run Raycoin Viewer
- supports HDR when enabled in Windows display settings
- supports game controllers ('A' button acts like the SHIFT key)
- on multi-GPU systems you can select which GPU to use with -gpu <number>, ex. -gpu 1, default is 0
- you can specify the data directory with -datadir <path>, ex. -datadir C:\\Temp\\Raycoin

Raycoin Command-Line:
- to mine manually, open Raycoin Console (or shift-right-click in the Raycoin folder and select “Open PowerShell window here”) and run the command: ./raycoin-cli generate
- to stop mining: ./raycoin-cli stopgenerate (first press CTRL+C to take back control of the shell window)
- to allow your desktop to be more usable while mining try sleeping momentarily: ./raycoin-cli generate 0 1000 (or mine slowly: ./raycoin-cli generate 0 100000)
- for more options see: ./raycoin-cli help generate
- on multi-GPU systems you can select which GPU to use with -gpu=<number>, ex. -gpu=1, default is 0

The original idea of Bitcoin is that it's decentralized because the hashing could be done at home on consumer-available hardware; then ASICs came along, making the home computer useless and threatening to centralize hashing power with privately held chip designs. Ray tracing is a rendering method that will become more dominant as the game industry shifts to it from “rasterization”, the current method. If the hash for your cryptocurrency is based on ray tracing hardware (RTX), that goes a long way towards ensuring the best hashing chips will always be in the hands of gamers / consumers, instead of companies like Bitmain, who just accumulate wealth and have direct and private access to the top silicon foundries like TSMC.

There have been attempts to circumvent this issue, like the “Dagger” proof-of-work used in Ethereum, but now even for Ethereum there are ASICs on the market, and future designs are claimed to be nearly 10x faster/cheaper than consumer-available GPUs. This is because Dagger, a simplistic hash focused on memory usage, does not properly stress the GPU's logic/cache/memory layout and capabilities.

Description of the Ray Tracing Proof-of-Work:

- rays are cast out into a randomized field (seeded from the block) of faceted spheres which reflect the rays chaotically (like a disco ball)
- each sphere has a random label (4 bytes), when a ray hits a sphere its label is concatenated to the ray's string
- 32 such labels in each ray's string are hashed together using blake2s on the GPU
- if the hash is less than the target then a block is found, and the XY screen coordinate of the successful ray is stored in the high 20 bits of the nonce for verification
- there is one additional constraint: the ray must travel a certain depth into the field, after which the motion is deemed chaotic enough (over 99% of rays pass this test)
- rays that exit the field wrap around to the origin with a small perturbation to their orientation
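The hashing and nonce steps above can be sketched in a few lines. This is a minimal illustration, not Raycoin's exact code — in particular the nonce bit layout (10 bits per screen axis for a 1024x1024 frame, packed into the high 20 bits of a 32-bit nonce) is my assumed packing:

```python
import hashlib

LABEL_BYTES = 4      # each sphere carries a random 4-byte label
LABELS_PER_RAY = 32  # a ray's string is complete after 32 hits

def port_hash(labels: list) -> bytes:
    """BLAKE2s over the concatenation of a ray's 32 sphere labels."""
    assert len(labels) == LABELS_PER_RAY
    assert all(len(l) == LABEL_BYTES for l in labels)
    return hashlib.blake2s(b"".join(labels)).digest()

def meets_target(digest: bytes, target: int) -> bool:
    """A block is found when the hash, read as an integer, is below the target."""
    return int.from_bytes(digest, "big") < target

def pack_nonce(x: int, y: int, low: int = 0) -> int:
    """Store the winning ray's screen coordinate in the high 20 bits of a
    32-bit nonce (10 bits per axis for 1024x1024), so verification can
    re-trace just that one ray. Exact packing is an assumption."""
    assert 0 <= x < 1024 and 0 <= y < 1024
    return (x << 22) | (y << 12) | (low & 0xFFF)
```

Verification then re-derives the sphere field from the block, traces only the ray at (x, y), and checks that the same BLAKE2s digest beats the target.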

In a full mining frame (1024x1024 RTs) Ray Tracing consumes 94% of the frame time (~40ms) with hashing just 4%, so this Proof-of-Work is dominated by Ray Tracing and thus ASIC-resistant. You can verify this yourself in the viewer by disabling hashing and enabling the profiler in the Engine Tuning menu.

Verification of a block hash requires only a single Ray Trace (1 RT) and takes about 1ms on my 2080 Ti, with the vast majority of that time spent in the GPU randomizing the field of spheres (not ray tracing). So verifying a blockchain as large as Bitcoin's would take only ~10 minutes.
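The ~10 minute figure is consistent with Bitcoin's chain length at the time of writing (roughly 550,000 blocks in late 2018 — my figure, not from the post):

```python
# Rough verification-time estimate: ~1 ms per block (as measured above)
# times Bitcoin's approximate chain height in late 2018.
blocks = 550_000
ms_per_block = 1
total_minutes = blocks * ms_per_block / 1000 / 60
print(round(total_minutes, 1))  # ~9.2 minutes
```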

ASIC Resistance:
One question that remains then is how difficult would it be to create a Ray Tracing ASIC?

1. For 100% accuracy, the biggest hurdle that the ASIC designers face is that they must reverse engineer RTX and use the exact same hierarchical building / traversal and ray-triangle intersection algorithm, capturing the ordering and the myriad of edge cases. If the ASIC design differs by even a single logic gate then some hashes will not be reproducible. As an example of the complexity of this, Microsoft's Fallback Layer for non-RT GPUs is about 10000 lines of code vs. a hash which is typically only 100 lines – a ray tracer is significantly more complex than a hasher.

2. A more likely strategy, however, would be to design a 99% accurate ray tracer that does not capture the ordering and edge cases, and then rely on a low-end RTX chip for hash verification – after 32 hits, such a tracer would still be correct about 70% of the time (0.99^32 ≈ 0.72). To combat this, Raycoin will be “memory hardened” by using a random generator to perturb the millions of vertices, so that the optimal ray tracing strategy is to store them all, as a ray can strike anywhere; this consumes multiple GB of expensive memory and ensures roughly a 10x upper limit on cost/performance efficiency (to be finalized once the memory limits of the low-end RTX chips are known). Furthermore, it may prove difficult to achieve even that 10x efficiency over consumer RTX, which is purpose-built for ray tracing, especially going forward as ray tracing becomes the dominant rendering method and we get more than 1 RT core per SM (streaming multiprocessor).

3. Which of the few high-end foundries would agree to manufacture such a chip that steps all over Nvidia's Ray Tracing IP and patents?

PoRT Visualizer:
Here you can see the labels and the hit count, the ray stops at hit #32:

The Engine Tuning menu can be used to customize the viewer appearance, here the specular and ray intensities have been cranked up so that the scene sparkles:

The visuals can be confusing, so here are some pointers:
- the large arrow (yellow or red) is the ray starting point and direction
- the smaller arrows show the ray after it has exited the field and wrapped back around to the starting point to continue tracing
- if you fly outside of the field, you can see what the ray sees when it wraps around, turn off Shade Last Miss Only in the Engine Tuning menu to make this background more apparent

Future Work:
Raycoin utilizes only the RT cores to ward off ASICs, but assuming a ray tracing ASIC could eventually be designed (and patent infringements avoided), the next step would be to extend PoRT to utilize the whole SM (streaming multiprocessor) via a randomly generated shader that operates on each ray's unique string of labels. This can be done by including the shader compiler with Raycoin and using the hash of each block as a seed to generate HLSL code. The code could not be completely random, as it must adhere to SM strengths, executing coherently among groups of rays in a fixed time while preserving the randomness of the string; but having semi-arbitrary shaders compiled as a proof-of-work step would be fundamental towards mitigating significant (4x or more) ASIC efficiencies, forcing ASICs to look more like the general-purpose SM.

This changing code would be entering territory dominated by FPGAs, but those are complex to design, expensive commercially, and would be slower to synthesize their logic gates than the compiler is to compile. These shaders would perform random floating point / integer math with reads/writes across the register file and deep data dependencies, cache and texture reads, warp-level (inter-thread) operations, and coherent, well-predicted flow control, all tuned to typical GPU pipelines and parameters. The amount of work done by this shader on each ray would be determined by the size of the whole SM relative to the RT core, ensuring that an ASIC could not have a more optimal density of components.
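The key property of block-seeded shader generation is that every node derives the identical program deterministically from the block hash. A sketch of that idea (the op set, register-file size, and op count here are all hypothetical, not Raycoin's design):

```python
import hashlib
import random

# Illustrative instruction set; a real generator would emit HLSL tuned
# to SM strengths (coherent flow control, deep data dependencies, etc.).
OPS = ["mad", "mul", "add", "xor", "rotl"]

def generate_shader(block_hash: bytes, n_ops: int = 16) -> list:
    """Derive a deterministic pseudo-random op sequence from a block hash.
    Miners and verifiers seed the same PRNG, so all nodes compile the
    identical shader for a given block."""
    rng = random.Random(block_hash)  # bytes are a valid PRNG seed
    lines = []
    for _ in range(n_ops):
        op = rng.choice(OPS)
        dst = rng.randrange(8)  # small register file keeps warps coherent
        src = rng.randrange(8)
        lines.append(f"r{dst} = {op}(r{dst}, r{src});")
    return lines

seed = hashlib.blake2s(b"example block header").digest()
assert generate_shader(seed) == generate_shader(seed)  # deterministic
```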

Edited qarterd (April 14, 2019 11:53:58)


Powered by DjangoBB

Lo-Fi Version