GF Code Scanner [Local + AI]

Security check for GreasyFork scripts (local scan plus AI review; AI responses are in Russian)

You will need to install an extension such as Tampermonkey, Greasemonkey or Violentmonkey to install this script.


Author: ktt21
Daily installs: 2
Total installs: 2
Ratings: 1 0 0
Version: 1.0.3
Created: 2026-03-23
Updated: 2026-03-23
Size: 12.4 KB
License: MIT
Applies to:

Purpose: This script helps people with little programming or code-analysis experience quickly assess the risk level of a script they are about to install from the GreasyFork website. When you open the "Code" tab of any script on GreasyFork, an automatic static analysis of the code is triggered: the script examines the source text without executing it, checking for dangerous patterns, functions, and access to sensitive data. Here are the main groups of checks it performs locally:

  1. Critical threats (code execution and mining): eval() - direct execution of arbitrary code from a string, the most dangerous pattern; new Function() - dynamic creation of functions (eval-like); miner signatures - search for cryptominer markers (coinhive, cryptonight, monero, minero.js).
  2. Code hiding and obfuscation: atob() - decoding strings from Base64, often used to hide malicious payloads; obfuscation markers - patterns such as _0xabc123 or String.fromCharCode, characteristic of obfuscated code.
  3. Device access and privacy: navigator.geolocation (GPS access); navigator.mediaDevices (webcam and microphone access); navigator.clipboard (reading from or writing to the clipboard).
  4. Working with user data: document.cookie (session hijacking or tracking); localStorage and sessionStorage (storing data on the user's computer); GM_getValue and GM_setValue (access to the script manager's persistent storage).
  5. Network activity: GM_xmlhttpRequest (requests that bypass the browser's CORS policies); fetch() and XMLHttpRequest (loading data from external servers); document.write() (can be used to tamper with page content).
  6. Domain analysis: the script extracts all URLs from the code, identifies the unique domain names, and lists them in the report so you can see where the script might be sending data.

Summary: for every match found, the script assigns a threat level (Low, Medium, High, Critical) and shows the specific lines of code where the pattern occurs.
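The checks above boil down to regex matching against the source text. Here is a minimal, hypothetical sketch of how such a scanner could work (the rule names, severities, and function names are my own illustration, not the script's actual code):

```javascript
// A few of the rules described above: each pairs a regex with a severity.
const RULES = [
  { name: 'eval',              severity: 'Critical', pattern: /\beval\s*\(/ },
  { name: 'new Function',      severity: 'Critical', pattern: /new\s+Function\s*\(/ },
  { name: 'miner signature',   severity: 'Critical', pattern: /coinhive|cryptonight|minero\.js/i },
  { name: 'atob',              severity: 'High',     pattern: /\batob\s*\(/ },
  { name: 'obfuscation',       severity: 'High',     pattern: /_0x[0-9a-f]{4,}|String\.fromCharCode/ },
  { name: 'geolocation',       severity: 'Medium',   pattern: /navigator\.geolocation/ },
  { name: 'cookies',           severity: 'Medium',   pattern: /document\.cookie/ },
  { name: 'GM_xmlhttpRequest', severity: 'Medium',   pattern: /GM_xmlhttpRequest/ },
  { name: 'fetch',             severity: 'Low',      pattern: /\bfetch\s*\(/ },
];

// Static scan: walk the source line by line (nothing is executed) and
// record every rule match together with its line number and severity.
function scanSource(code) {
  const findings = [];
  code.split('\n').forEach((line, i) => {
    for (const rule of RULES) {
      if (rule.pattern.test(line)) {
        findings.push({ line: i + 1, name: rule.name, severity: rule.severity, text: line.trim() });
      }
    }
  });
  return findings;
}

// Domain analysis: pull every URL out of the code and keep the unique hosts,
// so the report can show where the script might send data.
function extractDomains(code) {
  const urls = code.match(/https?:\/\/[^\s'"`)]+/g) || [];
  return [...new Set(urls.map(u => new URL(u).hostname))];
}
```

In this sketch a finding carries the line number, rule name, severity, and the matched line of text, which is exactly the shape of report the description above promises.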

Similar to the "GreasyFork Code Safety Scanner" script, you can click the "Check in AI" button (you must enter the API key for your chosen AI engine once) and select the engine you want:

  • Groq - verified
  • DeepSeek - verified
  • Grok - not verified, but functionality is preserved (you can customize it)
  • Gemini - verified
  • Qwen - not verified, but functionality is preserved (you can customize it)
  • ChatGPT - not tested, but functionality is preserved (you can customize it)

The script then sends the findings from the local static scan to the selected AI and receives a verdict on each finding: whether it looks like a real security issue or a likely false positive. This can serve as an additional, independent check of the code.
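As an illustration, the hand-off to an AI engine could be shaped like this. This is a hypothetical sketch, not the script's actual code: the prompt wording and the model id are assumptions, and the endpoint shown is Groq's OpenAI-compatible chat-completions URL as an example; only the request object is built here, without sending it.

```javascript
// Package local scan findings into a request object for an
// OpenAI-compatible chat-completions endpoint (Groq shown as an example).
// The returned object matches the shape GM_xmlhttpRequest accepts.
function buildAiRequest(findings, apiKey) {
  const prompt =
    'Review these static-analysis findings from a userscript and say, for each one, ' +
    'whether it looks dangerous or is a likely false positive:\n' +
    findings.map(f => `line ${f.line} [${f.severity}] ${f.name}: ${f.text}`).join('\n');
  return {
    url: 'https://api.groq.com/openai/v1/chat/completions', // example endpoint
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`,
    },
    data: JSON.stringify({
      model: 'llama-3.3-70b-versatile', // assumed model id
      messages: [{ role: 'user', content: prompt }],
    }),
  };
}
```

In a userscript the object would then be passed to GM_xmlhttpRequest with an onload handler that displays the AI's verdicts next to the local findings.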

Note 1: I would appreciate suggestions on how to adapt the request code for other AI systems, or on additional checks that would improve the script's quality.

Note 2: I’m not a programmer, so please don’t criticize; just offer suggestions if you have any. I use the script for myself, but if it’s useful to anyone else, I’d be happy.