This repo is released alongside my blog post: Dreaming over Text.
We took inspiration from Deep Dream, where an input image is optimized to increase a hidden layer's activation.
- Convert sentence to tensors.
- Get the sentence embeddings.
- Pass the embeddings through the fc2 layer and get the fc2 output.
- Optimize the sentence embeddings to increase the fc2 layer's output.
- Repeat steps 2 through 4 with the current sentence embeddings for a given number of iterations.
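The steps above can be sketched in PyTorch as follows. This is a minimal toy illustration, not the repo's actual code: the `embed` and `fc2` layers, the vocabulary size, and the token ids are all hypothetical stand-ins for the real model's layers.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

embed = nn.EmbeddingBag(100, 16)  # hypothetical sentence-embedding layer
fc2 = nn.Linear(16, 1)            # hypothetical fc2 layer to "dream" against

# Step 1: convert the sentence to tensors and get the sentence embedding.
token_ids = torch.tensor([[3, 17, 42]])  # toy ids standing in for a sentence
sent_emb = embed(token_ids).detach().requires_grad_(True)
initial_act = fc2(sent_emb).sum().item()

# Steps 2-4, repeated: optimize the embedding to increase the fc2 output.
opt = torch.optim.Adam([sent_emb], lr=0.1)
for _ in range(50):                  # given number of iterations
    opt.zero_grad()
    loss = -fc2(sent_emb).sum()      # negate so gradient descent ascends fc2
    loss.backward()
    opt.step()

final_act = fc2(sent_emb).sum().item()
```

Note that only the sentence embedding is optimized; the model weights stay frozen, exactly as in image Deep Dream where the network is fixed and the input is updated.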
I hate this.
Model correctly predicts this as negative.
First, we observe which words are similar (by cosine similarity) to the sentence embedding before and after dreaming.
Initially the sentence embedding was more similar to neutral words like "this, it, even, same", but as we increased the magnitude of the fc2 activations, it became similar to words like "bad, nothing, worse", which convey a negative meaning. This makes sense, as the model predicted the sentence as negative.
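The nearest-word probe used above can be sketched as ranking the vocabulary by cosine similarity to the sentence embedding. The vocabulary and word vectors here are toy stand-ins, not the real model's embeddings:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical vocabulary and word-embedding matrix.
vocab = ["this", "it", "even", "same", "bad", "nothing", "worse"]
word_vecs = torch.randn(len(vocab), 16)

# A sentence embedding that (for illustration) sits near the "bad" vector.
sent_emb = word_vecs[4] + 0.1 * torch.randn(16)

# Cosine similarity of the sentence embedding against every vocabulary word,
# then the top-3 most similar words.
sims = F.cosine_similarity(sent_emb.unsqueeze(0), word_vecs)
top = [vocab[i] for i in sims.argsort(descending=True)[:3]]
```

Running this probe before and after the dreaming loop shows how the neighborhood of the sentence embedding shifts in the embedding space.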
I love this show.
Model correctly predicts this as positive.
Initially the sentence embedding was more similar to neutral words like "this, it, even, same", but as we increased the magnitude of the fc2 activations, it became similar to positive words like "great, unique". This makes sense, as the model predicted the sentence as positive.
You can explore the embedding space and see how the sentence embedding changes its location by clicking on this link.