Cleans URLs from various popular sites.
< Feedback on the General URL Cleaner script
So, I've tried this in Firefox for Windows, Firefox for Linux, Chrome for Windows, Chrome for Linux, and I'm not seeing this behavior.
One thing I changed and then reverted: older versions iterate over all of document.links whenever the MutationObserver detects a change in the document. I thought this was inefficient, so I changed it to search for links in the mutation targets instead. It turns out that this is even less efficient - I measured the total number of iterations over links, and it was about 8 times higher than with plain iteration over document.links - so I changed it back.
I did just switch to making Amazon use HTTPS, though.
I think it's happening at line 76. If I revert to the previous version I don't have problems.
The hang happens and then Firefox asks if I want to stop the busy script, but even if I stop the script it doesn't unhang. I do have other Firefox addons installed, but this is the only Greasemonkey script I'm using.
What happens if you replace the cleanLinks function with this?
function cleanLinks(site) { new MutationObserver(function() {
    if (doc.links.length > 0) for (var i=doc.links.length; i--;) {
        var a = doc.links[i];
        if (a.href.startsWith('http')) {
            var old = a.href;
            linkCleaners[site](a);
            if (a.innerText==old) a.innerText=a.href;
            if (a.title==old) a.title=a.href;
        }
    }
}).observe(doc,{childList:true,subtree:true});}
I'm just wondering if there's a condition where document.links.length is less than zero, causing the for loop to be an infinite loop.
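To see that failure mode concretely, here's a minimal sketch using a plain counter in place of doc.links.length, no DOM involved. The cap is only there so the demo itself can't hang; a real page would just freeze:

```javascript
// Simulate how a reverse "i--" loop terminates (or doesn't) for a normal
// count versus a hypothetical negative one.
function reverseLoopIterations(length, cap) {
  var count = 0;
  for (var i = length; i--;) {
    count++;
    if (count >= cap) break; // safety cap so this demo can't hang
  }
  return count;
}

console.log(reverseLoopIterations(3, 1000));  // 3: stops when i-- reaches 0
console.log(reverseLoopIterations(-1, 1000)); // 1000: i-- is never 0, only the cap stops it
```

With a negative start, the condition `i--` evaluates to -1, -2, -3, ... which are all truthy, so the loop never exits on its own.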
Or perhaps just writing the for loop to run forwards instead of backwards:
function cleanLinks(site) { new MutationObserver(function() {
    for (var i=0; i<doc.links.length; i++) {
        var a = doc.links[i];
        if (a.href.startsWith('http')) {
            var old = a.href;
            linkCleaners[site](a);
            if (a.innerText==old) a.innerText=a.href;
            if (a.title==old) a.title=a.href;
        }
    }
}).observe(doc,{childList:true,subtree:true});}
I had the same problem as OP, this solves it.
That fix definitely looks like a winner!
The source in the above post is slightly off - the > got stripped out and replaced with the HTML entity for it...
Now I've noticed there's no more crash on YouTube video pages, but I still get crashes when searching for videos. https://www.youtube.com/results?search_query=*insert anything*
Try this:
function cleanLinks(site) { new MutationObserver(function() {
    for (var i in doc.links) {
        var a = doc.links[i];
        if (a.href && a.href.startsWith('http')) {
            var old = a.href;
            linkCleaners[site](a);
            if (a.innerText==old) a.innerText=a.href;
            if (a.title==old) a.title=a.href;
        }
    }
}).observe(doc,{childList:true,subtree:true});}
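For what it's worth, the extra a.href check matters here because for...in over an HTMLCollection also visits non-index keys like length. A rough stand-alone sketch, with a plain object as a stand-in for doc.links:

```javascript
// A plain object mimicking the keys for...in would see on a collection.
var fakeLinks = {
  0: { href: 'http://example.com/?utm_source=x' },
  1: { href: 'javascript:void(0)' },
  length: 2, // for...in visits this key too; its value has no .href
};

var cleaned = [];
for (var i in fakeLinks) {
  var a = fakeLinks[i];
  // Without the a.href guard, a.href.startsWith would throw on the
  // "length" entry, where a is the number 2.
  if (a.href && a.href.startsWith('http')) cleaned.push(a.href);
}
console.log(cleaned); // only the http link survives both guards
```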
Still not working.
Is it only not working on youtube searches?
Yes, it fails only on YT searches. The last working version is 2.2.2.2.
Can confirm - YT isn't working. I also still have to use the replacement cleanLinks posted above.
In what way isn't it working? I need more information.
Hi, I have the same issue as described above.
When I search on YT, Firefox freezes for a long time.
Firefox returns this message:
So, you're getting this issue even with the forward for loop?
for (var i=0; i<doc.links.length; i++)
It doesn't make sense to me that a standard forward for loop should cause an infinite loop, even if document.links.length were somehow -1, though I can see how the reverse for loop might.
I suppose I could try this:
if (doc.links.length>10) for (var i=0; i<doc.links.length && i<8192; i++)
This will only start iterating when there are at least 10 links (total) on the page, and will stop once i reaches 8192 no matter what (Newegg seems to have by far the most links of any site this script runs on, at ~2400).
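Sketched out with a plain array standing in for doc.links (the 10 and 8192 thresholds are just the values suggested above):

```javascript
// Guarded loop: skip near-empty pages, and hard-cap the iteration count
// so a pathological length can't keep the loop running forever.
function cleanCapped(links) {
  var processed = 0;
  if (links.length > 10) {
    for (var i = 0; i < links.length && i < 8192; i++) {
      processed++; // a real run would call linkCleaners[site](links[i]) here
    }
  }
  return processed;
}

console.log(cleanCapped(new Array(5)));      // 0: below the 10-link threshold
console.log(cleanCapped(new Array(2400)));   // 2400: Newegg-scale page, under the cap
console.log(cleanCapped(new Array(100000))); // 8192: runaway growth is cut off
```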
@Knowbody , I've tried the change you suggested, but the script still freezes.
I even tried disabling all other scripts; it still freezes. I have no other ideas.
What happens if you use this instead?
function cleanLinks(site, remain=false) { new MutationObserver(function(_,self) {
    var links = doc.getElementsByTagName("a");
    for (var i=0; i<links.length; i++) {
        var a = links[i];
        if (a.href && a.href.startsWith('http')) {
            var old = a.href;
            linkCleaners[site](a);
            if (a.innerText==old) a.innerText=a.href;
            if (a.title==old) a.title=a.href;
        }
    }
    if (!remain && doc.readyState=='complete') setTimeout(function(){self.disconnect();}, 20000);
}).observe(doc,{childList:true,subtree:true});}
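In case the self-disconnect part looks odd: a MutationObserver callback receives the observer itself as its second argument, which is what the (_, self) signature relies on. A tiny DOM-free sketch of that pattern; the stub observer here is made up purely for illustration:

```javascript
// Stub observer with the same callback shape as MutationObserver:
// the callback gets (mutations, observer), so it can disconnect itself.
function makeObserver(callback) {
  return {
    disconnected: false,
    disconnect: function () { this.disconnected = true; },
    fire: function () { callback([], this); }, // pass the observer as arg 2
  };
}

var obs = makeObserver(function (_, self) {
  self.disconnect(); // same shape as the delayed self.disconnect() above
});
obs.fire();
console.log(obs.disconnected); // true
```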
@Knowbody , searching is slower than with the script disabled.
However, this time it doesn't freeze.
Thank you.
Was it the updated script that worked, or the function above?
@Knowbody , yes, I updated the script with that last function, the one that loops over doc.getElementsByTagName.
Last update broke the script
Firefox hangs due to this script now, on any site that uses it, including Amazon, YouTube, and Newegg.
P.S. Amazon supports HTTPS now; on my machine Amazon always uses HTTPS, and the script doesn't work without modifying the cleanAmazon functions to use HTTPS.