Hi all,
We're now using Hound to index a fairly large codebase (>1.5 GB), and I just noticed that thousands of files are being ignored because they have ANSI encoding instead of UTF-8. They show up in the excluded_files.json file.
Is this something that can be fixed within Hound, or will I likely have to figure out how to change the encoding of all these files?
Thanks,
Alex.
(You can set up the script to only work on specific file extensions, as I have, or change it to suit.)
5. Save and close the script file, then NP++ itself. (This is necessary for Python Script to pick up the newly saved script file.)
6. Reopen NP++ and select your script file (whatever it's called).
7. Go and grab a coffee, because if (as I did) you have 20,000+ files to check, it will take some time. Whilst the script is running, the system will fight with NP++ for focus as it cycles through the folder structure you pointed it at.
FWIW: I am using an NVMe Samsung SSD and this script barely taxes it; if you have a regular spinning HDD, you might need to be more selective about where you point it. A standalone sketch of the same conversion idea is below if you'd rather not drive it through NP++.
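For anyone who prefers to run the conversion outside NP++, here is a minimal Python sketch of the same idea. It assumes the non-UTF-8 files are Windows "ANSI" (cp1252); the extension list and root path are placeholders to replace with your own, and you should take a backup before running anything like this against the codebase.

```python
# Sketch: walk a tree and re-encode likely-ANSI (cp1252) text files as UTF-8.
# Assumptions: the extension filter and root path below are hypothetical,
# and "ANSI" means Windows-1252 -- adjust both for your environment.
import pathlib

EXTENSIONS = {".cs", ".sql", ".txt"}  # hypothetical filter; match your own

def convert_tree(root):
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in EXTENSIONS:
            continue
        raw = path.read_bytes()
        try:
            raw.decode("utf-8")            # already valid UTF-8: leave it alone
            continue
        except UnicodeDecodeError:
            pass
        text = raw.decode("cp1252")        # assumed "ANSI" codepage
        path.write_bytes(text.encode("utf-8"))
        print(f"converted {path}")

if __name__ == "__main__":
    convert_tree(r"C:\path\to\codebase")   # hypothetical root path
```

Files that already decode as UTF-8 are skipped, so the script should be safe to re-run after Hound's next index pass to catch anything still listed in excluded_files.json.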