Large language models are capable of generating a great deal of objectionable content; as a result, there is growing interest in aligning these models to prevent undesirable generations.

Large language models have attracted broad interest in both academia and industry because of their ability to mimic human language convincingly. As a result, the chatbot market is projected to grow by over $994 million. This rapid growth also amplifies security and societal threats such as jailbreaking, phishing, cyberbullying, and fake news.

We are building algorithms that automatically detect machine-generated content. The goal is twofold: first, to identify whether an article (news story, poem, abstract, email, etc.) was written by a human or by a large language model (LLM); and second, to flag such content for its appropriateness.
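
For illustration, here is a minimal sketch of what a baseline human-vs-LLM detector could look like, using a character n-gram TF-IDF representation with a logistic regression classifier. This is not the project's actual method; the example texts and labels are purely illustrative placeholders.

```python
# Minimal baseline sketch: classify text as human-written (0) or machine-generated (1).
# The tiny dataset below is a hypothetical placeholder, not real project data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "The storm rolled in just after dusk, rattling the windows.",
    "As an AI language model, I can provide a summary of the topic.",
    "I scribbled the recipe on the back of an old receipt.",
    "In conclusion, the aforementioned factors collectively contribute to the outcome.",
]
labels = [0, 1, 0, 1]

# Stratified split so both classes appear in train and test.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)

# Character n-gram TF-IDF features feeding a simple linear classifier.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out half
```

In practice, a real detector would be trained on a much larger corpus of human- and LLM-written articles and would likely use stronger features or a fine-tuned language model rather than this linear baseline.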

MUGC Tool

Check out our online MUGC tool:

https://staging.d6rx2p2mtku7l.amplifyapp.com