stopword
is a module for Node.js and the browser that strips stopwords from an input text. It covers 62 languages. In natural language processing, "stopwords" are words that occur so frequently that they can safely be removed from a text without altering its meaning.
All minified (.min) files are approximately 130 kB each.
Language codes have changed from ISO-639-1
(two characters) to ISO-639-3
(three characters). This makes room for smaller languages that were not covered by ISO-639-1.
If you haven't specified any stopword lists and have just gone with the default (English), everything should continue to work without any changes.
Destructuring require:
const { removeStopwords, eng, fra } = require('stopword')
// 'removeStopwords', 'eng' and 'fra' available
Old style require:
const sw = require('stopword')
// sw.removeStopwords and sw.<language codes> now available
Destructuring import:
import { removeStopwords, eng, fra } from './dist/stopword.esm.mjs'
// 'removeStopwords', 'eng' and 'fra' available
Old style import:
import * as sw from './dist/stopword.esm.mjs'
// 'sw.removeStopwords' + 'sw.<language codes>' available
<script src="https://cdn.jsdelivr.net/npm/stopword/dist/stopword.umd.min.js"></script>
<script>
// sw.removeStopwords and sw.<language codes> now available
</script>
import * as sw from 'stopword'
// 'sw.removeStopwords' + 'sw.<language codes>' available
import { removeStopwords, eng, fra } from 'stopword'
// 'removeStopwords', 'eng' and 'fra' available
npm i @types/stopword --save-dev
By default, stopword
strips "meaningless" English words from an array of words:
const { removeStopwords } = require('stopword')
const oldString = 'a really Interesting string with some words'.split(' ')
const newString = removeStopwords(oldString)
// newString is now [ 'really', 'Interesting', 'string', 'words' ]
You can also specify a language other than English:
const { removeStopwords, swe } = require('stopword')
const oldString = 'Trädgårdsägare är beredda att pröva vad som helst för att bli av med de hatade mördarsniglarna åäö'.split(' ')
// swe contains Swedish stopwords
const newString = removeStopwords(oldString, swe)
// newString is now [ 'Trädgårdsägare', 'beredda', 'pröva', 'helst', 'hatade', 'mördarsniglarna', 'åäö' ]
Extracting numbers from text in Korean script with the words-n-numbers module
and removing the 0-9 'stopwords':
const { removeStopwords, _123 } = require('stopword')
const { extract, words, numbers } = require('words-n-numbers')
const oldString = '쾰른 대성당(독일어: Kölner Dom, 정식 명칭: Hohe Domkirche St. Peter)은 독일 쾰른에 있는 로마 가톨릭교회의 성당이다. 고딕 양식으로 지어졌다. 쾰른 대교구의 주교좌 성당이라 쾰른 주교좌 성당이라고도 불린다. 현재 쾰른 대교구의 교구장은 라이너 마리아 뵐키 추기경이다. 이 성당은 독일에서 가장 잘 알려진 건축물로, 성 바실리 대성당에 이어, 1996년 유네스코 세계유산으로 등재되었다. 유네스코에서는 쾰른 대성당을 일컬어 “인류의 창조적 재능을 보여주는 드문 작품”이라고 묘사하였다.[1] 매일 2만여 명의 관광객이 이 성당을 찾는다.[2]'
let newString = extract(oldString, { regex: [numbers] })
newString = removeStopwords(newString, _123)
// newString is now [ '1996' ]
And last, but not least, it is possible to use your own, custom list of stopwords:
const { removeStopwords } = require('stopword')
const oldString = 'you can even roll your own custom stopword list'.split(' ')
// Just add your own list/array of stopwords
const newString = removeStopwords(oldString, ['even', 'a', 'custom', 'stopword', 'list', 'is', 'possible'])
// newString is now [ 'you', 'can', 'roll', 'your', 'own']
With spread syntax you can easily combine several stopword arrays into one. This is useful when two languages are used interchangeably, or when certain words appear in every document but are not in your existing stopword arrays.
const { removeStopwords, eng, swe } = require('stopword')
const oldString = 'a really interesting string with some words trädgårdsägare är beredda att pröva vad som helst för att bli av med de hatade mördarsniglarna'.split(' ')
const customStopwords = ['interesting', 'really']
const newString = removeStopwords(oldString, [...eng, ...swe, ...customStopwords])
// newString is now ['string', 'words', 'trädgårdsägare', 'beredda', 'pröva', 'helst', 'hatade', 'mördarsniglarna']
Returns an Array that represents the text with the specified stopwords removed.
text - An array of words
stopwords - An array of stopwords
const { removeStopwords } = require('stopword')
var text = removeStopwords(text[, stopwords])
// text is now an array of given words minus specified stopwords
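As a rough mental model (an illustrative sketch, not the library's actual implementation), removeStopwords behaves like a simple array filter:

```javascript
// Illustrative sketch of removeStopwords' behaviour: keep every word
// that is not present in the stopword list. The real library also
// handles details such as defaulting to the English list.
const removeStopwordsSketch = (text, stopwords) =>
  text.filter(word => !stopwords.includes(word))

console.log(removeStopwordsSketch(
  'a really interesting string'.split(' '),
  ['a', 'with', 'some']
))
// [ 'really', 'interesting', 'string' ]
```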
Language codes follow ISO 639-3 Language Code list. Arrays of stopwords for the following 62 languages are supplied:
_123 - 0-9 for different scripts (regular, Farsi, Korean and Myanmar)
afr - Afrikaans
ara - Arabic, Macrolanguage
hye - Armenian
eus - Basque
ben - Bengali
bre - Breton
bul - Bulgarian
cat - Catalan, Valencian
zho - Chinese, Macrolanguage
hrv - Croatian
ces - Czech
dan - Danish
nld - Dutch
eng - English
epo - Esperanto
est - Estonian, Macrolanguage
fin - Finnish
fra - French
glg - Galician
deu - German
ell - Greek, Modern
guj - Gujarati
hau - Hausa
heb - Hebrew
hin - Hindi
hun - Hungarian
ind - Indonesian
gle - Irish
ita - Italian
jpn - Japanese
kor - Korean
kur - Kurdish, Macrolanguage
lat - Latin
lav - Latvian, Macrolanguage
lit - Lithuanian
lgg - Lugbara
lggNd - Lugbara, No diacritics
msa - Malay, Macrolanguage
mar - Marathi
mya - Myanmar (Burmese)
nob - Norwegian bokmål
fas - Persian (Farsi)
pol - Polish
por - Portuguese
porBr - Portuguese-Brazilian
panGu - Punjabi (Panjabi), Gurmukhi script
ron - Romanian (Moldavian, Moldovan)
rus - Russian
slk - Slovak
slv - Slovenian
som - Somali
sot - Sotho, Southern
spa - Spanish
swa - Swahili, Macrolanguage
swe - Swedish
tgl - Tagalog (Filipino)
tha - Thai
tur - Turkish
ukr - Ukrainian
urd - Urdu
vie - Vietnamese
yor - Yoruba
zul - Zulu
jpn (Japanese), tha (Thai), zho (Chinese) and some of the other supported languages have no spaces between words. For these languages you need to split the text into an array of words in some other way than textString.split(' ')
. You can check out SudachiPy and TinySegmenter for Japanese, and chinese-tokenizer, jieba and pkuseg for Chinese.
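In Node.js (16+) or a modern browser, the built-in Intl.Segmenter can also produce a word array for such languages. A minimal sketch for Japanese (the sample sentence is our own, not from the library's docs):

```javascript
// Use the built-in Intl.Segmenter (ICU word segmentation) to split
// Japanese text into words; the resulting array could then be passed
// to removeStopwords together with the jpn stopword list.
const segmenter = new Intl.Segmenter('ja', { granularity: 'word' })
const text = '吾輩は猫である'
const words = [...segmenter.segment(text)]
  .filter(s => s.isWordLike) // drop whitespace/punctuation segments
  .map(s => s.segment)
console.log(words)
```

Segmentation quality varies by runtime's ICU data, so a dedicated tokenizer may still give better results.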
If you can't find a stopword file for your language, you can try creating one with stopword-trainer
. We're happy to help you in the process.
Most of this work comes from other projects and people, and wouldn't be possible without them. Thanks, among others, to the stopwords-iso project and the more-stoplist project. And thanks for all your code input: @arthurdenner, @micalevisk, @fabric-io-rodrigues, @behzadmoradi, @guysaar223, @ConnorKrammer, @GreXLin85, @nanopx, @virtual, @JustroX, @DavideViolante, @rodfeal, @BLKSerene, @dsdenes, @msageryd and @imposeren!
Licenses for this library and all third party code.