Sha256: 43c5b13d1202269be4cacfdd7deb16dfc169820cc89d6d41dcd99acf720d699a
Contents?: true
Size: 1.54 KB
Versions: 1
Stored size: 1.54 KB
Contents
#!/usr/bin/env bash

# Portable shuffle: shuffle input lines via Perl's List::Util.
myshuf() {
  perl -MList::Util=shuffle -e 'print shuffle(<>);' "$@";
}

# Lowercase, prefix each line's class id with __label__, space out
# punctuation, squeeze repeated spaces, then shuffle the lines.
normalize_text() {
  tr '[:upper:]' '[:lower:]' | sed -e 's/^/__label__/g' | \
    sed -e "s/'/ ' /g" -e 's/"//g' -e 's/\./ \. /g' -e 's/<br \/>/ /g' \
        -e 's/,/ , /g' -e 's/(/ ( /g' -e 's/)/ ) /g' -e 's/\!/ \! /g' \
        -e 's/\?/ \? /g' -e 's/\;/ /g' -e 's/\:/ /g' | tr -s " " | myshuf
}

RESULTDIR=result
DATADIR=data

mkdir -p "${RESULTDIR}"
mkdir -p "${DATADIR}"

# Download and normalize the DBpedia dataset on first run.
if [ ! -f "${DATADIR}/dbpedia.train" ]
then
  wget -c "https://github.com/le-scientifique/torchDatasets/raw/master/dbpedia_csv.tar.gz" -O "${DATADIR}/dbpedia_csv.tar.gz"
  tar -xzvf "${DATADIR}/dbpedia_csv.tar.gz" -C "${DATADIR}"
  cat "${DATADIR}/dbpedia_csv/train.csv" | normalize_text > "${DATADIR}/dbpedia.train"
  cat "${DATADIR}/dbpedia_csv/test.csv" | normalize_text > "${DATADIR}/dbpedia.test"
fi

make

echo "Training..."
./fasttext supervised -input "${DATADIR}/dbpedia.train" -output "${RESULTDIR}/dbpedia" -dim 10 -lr 0.1 -wordNgrams 2 -minCount 1 -bucket 10000000 -epoch 5 -thread 4

echo "Quantizing..."
./fasttext quantize -output "${RESULTDIR}/dbpedia" -input "${DATADIR}/dbpedia.train" -qnorm -retrain -epoch 1 -cutoff 100000

echo "Testing original model..."
./fasttext test "${RESULTDIR}/dbpedia.bin" "${DATADIR}/dbpedia.test"

echo "Testing quantized model..."
./fasttext test "${RESULTDIR}/dbpedia.ftz" "${DATADIR}/dbpedia.test"

# Compare the on-disk size of the original (.bin) and quantized (.ftz) models.
wc -c < "${RESULTDIR}/dbpedia.bin" | awk '{print "Size of the original model:\t",$1;}'
wc -c < "${RESULTDIR}/dbpedia.ftz" | awk '{print "Size of the quantized model:\t",$1;}'
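The normalize_text helper lowercases its input, prefixes each line (which begins with the numeric class id in dbpedia_csv) with __label__, spaces out punctuation so each mark becomes its own token, and squeezes repeated spaces. A minimal sketch of what it produces for a single made-up CSV row follows; the sample row is an assumption for illustration, and myshuf is omitted because shuffling a single line is a no-op:

```shell
# Same pipeline as normalize_text above, minus the shuffle step.
normalize_line() {
  tr '[:upper:]' '[:lower:]' | sed -e 's/^/__label__/g' | \
    sed -e "s/'/ ' /g" -e 's/"//g' -e 's/\./ \. /g' -e 's/<br \/>/ /g' \
        -e 's/,/ , /g' -e 's/(/ ( /g' -e 's/)/ ) /g' -e 's/\!/ \! /g' \
        -e 's/\?/ \? /g' -e 's/\;/ /g' -e 's/\:/ /g' | tr -s " "
}

# Hypothetical dbpedia_csv-style row: class id, title, abstract.
echo '1,"Some Title","A sentence, with (parens)!"' | normalize_line
```

The output line starts with the fastText label token (__label__1), which is what the supervised trainer keys on; everything after it is the tokenized text.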
Version data entries
1 entry across 1 version & 1 rubygem
Version | Path
---|---
fasttext-0.1.0 | vendor/fastText/quantization-example.sh