We use proprietary models together with existing AI-detection algorithms to scan uploaded text and estimate the risk that it contains AI-generated content. Our models are trained on English text only, and for file uploads the risk score may currently be based on only a portion of the document's text. Detection accuracy can also vary with the AI model that was used to generate the text.

No detector is perfectly accurate: text that is entirely human-written can still be flagged as high risk, and the same is true of other AI-detection tools. Risk scores are simply conversions of the model's probability outputs. Therefore, no decision should be made solely on the basis of a risk score. Instead, scores can help you decide where to focus your review when searching for AI-generated content.

In summary, we offer a tool that converts model predictions of the probability that content was AI-generated into risk scores, and we cannot guarantee the accuracy of those predictions. We look forward to adding new models and detection techniques to improve our accuracy. Please give us feedback!
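As a rough illustration of what "converting a probability output into a risk score" can look like, the sketch below bins a model probability into a coarse risk label. The thresholds and function name are hypothetical, chosen for illustration only; they are not the actual cutoffs used by the product.

```python
def probability_to_risk(p: float) -> str:
    """Map a model's probability that text is AI-generated to a coarse
    risk label. Thresholds are illustrative, not the product's real ones."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if p < 0.25:
        return "low"
    if p < 0.75:
        return "medium"
    return "high"

print(probability_to_risk(0.12))  # low
print(probability_to_risk(0.88))  # high
```

Because the label is a lossy summary of the underlying probability, two documents with the same label can have quite different probabilities, which is one more reason not to treat a risk score as a verdict.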