According to physical laws as we understand them, the only way to generate fundamentally random numbers is to use quantum processes. However, confirming that the outputs of a real physical device are truly quantum is a non-trivial task that requires a detailed understanding of the device's operation. In this project we aim to do exactly that, and hence to independently assess the quality of quantum random number generators (QRNGs). Such assessment is an important step towards improving the marketability of QRNGs, as well as enabling their use in other devices, particularly those with sensitive applications.

In general, QRNGs can be based on very different types of quantum process in different physical systems. We have chosen to work with optics-based QRNGs, where there is some commonality between approaches, so that aspects of the analysis of one device can be applied to another. For each device we will produce and refine a theoretical model whose accuracy will be assessed experimentally.

Based on each device model we will derive a formula for the min-entropy of the device outputs as a function of the model parameters. Min-entropy is the relevant quantity for assessing the amount of high-quality randomness that can be extracted from a lower-quality source; the idea is that the raw randomness from the device must be run through a randomness extractor to counteract the effects of classical noise and other hardware manufacturing imperfections. The functional relationship between the model parameters and the extractable randomness can be used to understand how significant each device component is to the process. The most significant parameters will require careful measurement to give precise experimental characterisation, while for others rough measurements may suffice. Having such a function may also point to components for which small improvements in quality lead to large increases in the output randomness.
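As a minimal illustration of the role min-entropy plays here (not tied to any particular device model from the project), the sketch below computes the min-entropy of a discrete source and the corresponding number of near-uniform bits an extractor could target. The symbol probabilities are hypothetical stand-ins for what would, in practice, be derived from measured device parameters:

```python
import math

def min_entropy(probs):
    """Min-entropy H_min = -log2(max_i p_i), in bits per symbol.

    This is the worst-case guessing measure used to size the output of a
    randomness extractor, in contrast to Shannon entropy, which is an
    average-case measure.
    """
    return -math.log2(max(probs))

# Hypothetical raw bit source: classical noise biases it towards 0.
p = [0.6, 0.4]            # P(0), P(1) -- illustrative values only
h_min = min_entropy(p)    # bits of min-entropy per raw bit

# From n raw bits, an extractor can output roughly n * H_min bits
# (less an epsilon-dependent security loss, ignored in this sketch).
n = 1_000_000
extractable = int(n * h_min)
```

In the project itself, the probabilities would not be assumed but expressed as a function of the characterised device parameters, so that the sensitivity of `h_min` to each parameter can be read off directly.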
Such findings can be fed back to the manufacturers so that they can improve their QRNGs in the future.

One way to test RNGs is via statistical testing of their output sequences. While passing such tests is not sufficient to confirm correct operation, it is a useful way to diagnose badly performing devices. We consider it reasonable that such statistical tests form part of any QRNG, since a device that has passed the assurance process may later become damaged or cease to function correctly. Appropriate statistical testing may pick this up quickly enough to stop or minimise any damage the biased output could cause to applications that depend on its randomness. We will therefore develop new statistical testing procedures for checking the quality of a stream of bits on-the-fly, so that malfunctions can be quickly identified and possibly diagnosed. We note that the classical layers surrounding the quantum core can also compromise the quality of the output. While some of the statistical testing will screen for problems in these layers, testing of these aspects will be limited in this project.

The outputs of the project will be:
- theoretical models of each device
- methods to calculate the amount of extractable randomness for each device
- experimental determination of the relevant parameters of each device, and hence quantification of the amount of randomness it can output
- identification of suitable extractors to turn the raw randomness into high-quality randomness
- new statistical tests as diagnostics of badly performing devices
- input into the drafting of test-standards documentation describing the process that can be used to test QRNGs
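To illustrate the kind of on-the-fly statistical diagnostic discussed above, here is a sketch of a rolling frequency (monobit) test that flags windows whose bias is statistically implausible for a fair source. It is not one of the new tests the project will develop; the window size and z-score threshold are illustrative assumptions, not values prescribed by any standard:

```python
from collections import deque

class OnlineFrequencyTest:
    """Rolling monobit test: flag a window whose count of ones deviates
    too far from n/2. Window size and z-threshold are illustrative
    choices, not values taken from any test-standards document."""

    def __init__(self, window=1024, z_threshold=4.0):
        self.window = window
        self.z_threshold = z_threshold
        self.bits = deque(maxlen=window)
        self.ones = 0

    def feed(self, bit):
        """Consume one bit (0 or 1); return True if the current full
        window looks suspiciously biased."""
        if len(self.bits) == self.window:
            # Oldest bit is about to be evicted by the bounded deque.
            self.ones -= self.bits[0]
        self.bits.append(bit)
        self.ones += bit
        if len(self.bits) < self.window:
            return False  # not enough data yet
        n = self.window
        # For fair bits, ones ~ Binomial(n, 0.5): mean n/2, sd sqrt(n)/2.
        z = abs(self.ones - n / 2) / ((n ** 0.5) / 2)
        return z > self.z_threshold
```

Running such a test continuously on the output stream allows a stuck or strongly biased device to be caught within roughly one window of bits, at constant memory cost per test.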