Discussions » Requests

Get HTTP status when page attempts to load, in a [URL Status Code: XXX] format

§
Posted: 16/08/2022
Edited: 16/08/2022

Not sure if this is possible, but I'd like a simple script that, when a page loads (the user enters a URL or clicks a link), console.logs the HTTP status of that page ("<URL> Status code: <Status_code>"). I'm working on an AFK system that automatically opens pages (in a new or the current tab) and extracts links. The problem is that, rarely, a page may return a 5XX error (commonly 503), and I then miss the links on that page. While I could simply go to that tab and refresh, that requires my presence, so to guarantee I extract every link I have to not be AFK.

So my plan is this: extract links from the visited URLs, open Firefox's browser console and copy the messages, find in those messages the URLs that returned a 5XX error, reopen those URLs, and repeat until I'm left only with URLs that either always error out or have loaded successfully.

The console log already records this under the errors category, but in a clunky format (in Firefox, for example, it looks like this):

GET https://www.example.com
[HTTP/2 503 Service Unavailable 382ms]

Which is unwieldy.
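For reference, a minimal sketch of the requested logging (the helper name formatStatus is my own; note that re-fetching document.URL is a separate request, so its status can differ from the one the browser got when it first loaded the page):

```javascript
// Build the log line in the "<URL> Status code: <Status_code>" style asked for above.
function formatStatus(url, status) {
  return url + ' Status code: ' + status;
}

// Browser-only part: re-request the current page and log the status.
// Caveat: this is a NEW request, not the status of the original page load.
if (typeof document !== 'undefined') {
  fetch(document.URL)
    .then(function (response) {
      console.log(formatStatus(document.URL, response.status));
    })
    .catch(function (err) {
      console.log(document.URL + ' fetch failed: ' + err);
    });
}
```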

§
Posted: 17/08/2022
Edited: 17/08/2022

I guess you're talking about fetch and network requests?

var response;
(async function () {
  // Fetch the page
  response = await fetch('https://greasyfork.org/en/discussions/requests/144987-get-http-status-when-page-attempts-to-load-in-a-url-status-code-xxx-format');
  // Read the response body
  const html = await response.text();
  // Parse the response into a document
  const newDocument = new DOMParser().parseFromString(html, 'text/html');

  if (response.status !== 200) { // If the fetch failed
    throw new Error('GET fetch failed');
  }
})();

§
Posted: 18/08/2022

I tested this code, based on what you gave me:

// ==UserScript==
// @name     HTTP status
// @version  1
// @grant    none
// ==/UserScript==
var response;
(async function () {
  response = await fetch(document.URL); // Fetch the current page again
  const html = await response.text(); // Read the response body
  const newDocument = new DOMParser().parseFromString(html, 'text/html'); // Parse the response

  if (response.status !== 200) { // Fetch failed
    console.log(document.URL + " Load Failed! (status:" + response.status + ")");
  } else {
    console.log(document.URL + " Load Success! (status:" + response.status + ")");
  }
})();

But, uhh: I tested opening 10+ FurAffinity pages at once to trigger the 503 error page on some of the tabs, and it logged "Load Success! (status:200)" on the tabs that had errored out. The browser console, however, correctly reports an error under the error filter. Here is a set of test links you can open at once (use this extension or something similar):

https://www.furaffinity.net/view/45431091/
https://www.furaffinity.net/view/44734008/
https://www.furaffinity.net/view/44531912/
https://www.furaffinity.net/view/44107287/
https://www.furaffinity.net/view/43796203/
https://www.furaffinity.net/view/43592430/
https://www.furaffinity.net/view/43502408/
https://www.furaffinity.net/view/42916466/
https://www.furaffinity.net/view/42334798/
https://www.furaffinity.net/view/41798589/
https://www.furaffinity.net/view/41017063/
https://www.furaffinity.net/view/40919930/
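That 200 result is actually expected: the userscript's fetch is a brand-new request made after the page finished loading, and by then the server may answer normally even though the original navigation got a 503. One way around re-fetching is to read the status of the original navigation itself. A sketch, with the caveat that PerformanceNavigationTiming.responseStatus is a relatively new property (Chromium 109+; Firefox support may be absent or more recent, so check before relying on it):

```javascript
// Helper: classify 5XX responses (the name is my own).
function isServerError(status) {
  return status >= 500 && status < 600;
}

// Sketch: read the status of the ORIGINAL page load instead of re-fetching.
if (typeof performance !== 'undefined' && typeof performance.getEntriesByType === 'function') {
  const nav = performance.getEntriesByType('navigation')[0];
  if (nav && typeof nav.responseStatus === 'number') {
    const status = nav.responseStatus;
    console.log(document.URL + ' Status code: ' + status);
    if (isServerError(status)) {
      // e.g. queue this URL for a retry pass
    }
  }
}
```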
§
Posted: 18/08/2022
Edited: 18/08/2022

Since you're actually not talking about fetch or any sort of network request, your solution is much simpler:

const errorMatch = document.body.innerText.match(/Error 503|^Please wait a few seconds and try your request again\.$/);
if (errorMatch) { // match() returns null when nothing matches, so check that instead of [0]
  location.reload();
}

It makes the page reload until it loads successfully.
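One caveat with reloading unconditionally: if the site keeps serving the error page, the tab reloads forever. A minimal capped variant (the sessionStorage key name and the limit of 5 are my own choices) could look like:

```javascript
// Decide whether another reload attempt is allowed (helper name is my own).
function shouldRetry(attempts, maxAttempts) {
  return attempts < maxAttempts;
}

if (typeof document !== 'undefined') {
  const errorPattern = /Error 503|Please wait a few seconds and try your request again\./;
  if (errorPattern.test(document.body.innerText)) {
    // Track attempts per tab so we don't reload endlessly.
    const attempts = Number(sessionStorage.getItem('reload-attempts') || 0);
    if (shouldRetry(attempts, 5)) {
      sessionStorage.setItem('reload-attempts', attempts + 1);
      location.reload();
    } else {
      console.log(document.URL + ' still failing after ' + attempts + ' reloads');
    }
  } else {
    // Page loaded fine; reset the counter for future navigations.
    sessionStorage.removeItem('reload-attempts');
  }
}
```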

It's an extremely simple idea and solution; I first had it when I made this: https://greasyfork.org/en/scripts/418200-malfunction-fix-errors-on-mal-text-autosaver

§
Posted: 20/08/2022

a simple code that when a user loads a page (enters a URL or clicks a link), will console.log a HTTP status of the page ("<URL> Status code: <Status_code>")

I'm working on a AFK system that automatically opens a page (either new or current tab), and extract links

How is that even related? Also, if you need to scrape websites, you should use wget and Puppeteer or alternatives, not a userscript, lol
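For what it's worth, a rough Puppeteer sketch of that approach (assuming the puppeteer package is installed; the helper names, retry limit, and placeholder URL are my own), which logs each page's real navigation status and retries 5XX responses:

```javascript
// Build the "<URL> Status code: <code>" log line (helper name is my own).
function formatResult(url, status) {
  return url + ' Status code: ' + status;
}

// Open each URL, log its main-request status, and retry server errors.
async function crawl(urls, maxRetries) {
  const puppeteer = require('puppeteer'); // assumes puppeteer is installed
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  for (const url of urls) {
    let status = 0;
    for (let attempt = 0; attempt <= maxRetries && (status === 0 || status >= 500); attempt++) {
      try {
        const response = await page.goto(url); // HTTPResponse for the main request
        status = response ? response.status() : 0;
      } catch (err) {
        status = 0; // navigation error (DNS failure, timeout, ...)
      }
    }
    console.log(formatResult(url, status));
  }
  await browser.close();
}

// Example usage (placeholder URL):
// crawl(['https://www.example.com/'], 3);
```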
