---
id: cheatsheet
title: Cheatsheet
---

## Word representation learning

To learn word vectors, run:

```bash
$ ./fasttext skipgram -input data.txt -output model
```
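The `skipgram` command expects `data.txt` to be plain UTF-8 text. A common (optional) preparation step is to lowercase the corpus and separate punctuation from words; a minimal sketch, using an illustrative corpus rather than real data:

```python
import re

# Hypothetical preprocessing sketch: lowercase the text and put spaces
# around punctuation so each token is whitespace-separated, then write the
# result to data.txt (the input file used in the command above).
raw = "FastText learns word vectors. It is fast!"
normalized = re.sub(r"([.!?,])", r" \1 ", raw.lower()).strip()
with open("data.txt", "w") as f:
    f.write(normalized + "\n")
```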

## Obtaining word vectors

To print the word vectors of the words listed in a text file `queries.txt`, run:

```bash
$ ./fasttext print-word-vectors model.bin < queries.txt
```
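`print-word-vectors` writes one line per query word: the word itself followed by its vector components, space-separated. A small parsing sketch (the sample line is illustrative, not real model output):

```python
# Parse one output line of `print-word-vectors`: first token is the word,
# the remaining tokens are the embedding components.
line = "hello 0.1052 -0.2431 0.0087 0.3311"
tokens = line.split()
word, vector = tokens[0], [float(x) for x in tokens[1:]]
```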

## Text classification

To train a text classifier, run:

```bash
$ ./fasttext supervised -input train.txt -output model
```
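For supervised training, each line of `train.txt` must start with one or more labels carrying the `__label__` prefix (the default, configurable with the `-label` option), followed by the example text. A sketch that writes a toy training file in this format (the examples themselves are made up):

```python
# Toy examples, formatted the way `fasttext supervised` expects:
# "__label__<name> <text>" on each line of train.txt.
examples = [
    ("positive", "I really enjoyed this film"),
    ("negative", "the plot was dull and predictable"),
]
with open("train.txt", "w") as f:
    for label, text in examples:
        f.write(f"__label__{label} {text}\n")
```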

Once the model is trained, you can evaluate it by computing the precision and recall at k (P@k and R@k) on a test set:

```bash
$ ./fasttext test model.bin test.txt 1
```
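Under the standard definitions, P@k is the fraction of the top-k predicted labels that are correct, and R@k is the fraction of the true labels that appear in the top k. A minimal sketch for a single example, with toy labels rather than model output:

```python
def precision_recall_at_k(predicted, true_labels, k):
    """Precision and recall at k for one example.

    predicted:   labels ranked by model confidence (best first)
    true_labels: set of gold labels for this example
    """
    top_k = predicted[:k]
    hits = sum(1 for label in top_k if label in true_labels)
    return hits / k, hits / len(true_labels)

# Toy case: the single top prediction matches the single true label.
p, r = precision_recall_at_k(["sports", "news"], {"sports"}, k=1)
```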

To obtain the k most likely labels for a piece of text, run:

```bash
$ ./fasttext predict model.bin test.txt k
```

To obtain the k most likely labels and their associated probabilities for a piece of text, run:

```bash
$ ./fasttext predict-prob model.bin test.txt k
```
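`predict-prob` outputs, for each input line, alternating labels and probabilities. A parsing sketch (the sample line below is illustrative, not real model output):

```python
# Parse one output line of `predict-prob`: tokens alternate between a
# __label__-prefixed label and its probability.
line = "__label__sports 0.83 __label__news 0.12"
tokens = line.split()
pairs = [(tokens[i], float(tokens[i + 1])) for i in range(0, len(tokens), 2)]
```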

To compute vector representations of sentences or paragraphs, run:

```bash
$ ./fasttext print-sentence-vectors model.bin < text.txt
```

## Quantization

To create a quantized model file (`.ftz`) with a smaller memory footprint, run:

```bash
$ ./fasttext quantize -output model
```

All other commands, such as `test`, also work with this model:

```bash
$ ./fasttext test model.ftz test.txt
```
