Feedback

In Chapter 24, “Advanced Holy Temple Magician,” we talked about how to advance. That was ten thousand years ago.

In addition to improving their magic, senior magicians also wield a magical skill called “feedback.”

This technique first showers the other party with praise (the classic flattery-as-attack), then tactfully points out their shortcomings, and finally achieves the effect of reducing the other party’s mana to zero.
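
In Python, the spell itself is trivial; here is a toy sketch (all wording invented for illustration):

def feedback(praise, shortcoming):
    # Step 1: affirm. Step 2: tactfully point out the flaw. Step 3: mana hits zero.
    return f"{praise} That said, {shortcoming}"

print(feedback("Your fireball has impressive range!", "it keeps landing on our own camp."))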


Judging by the young magician’s few senior friends, they only use this technique when a deadline is looming.

It seems that feedback is indeed an advanced skill.

Unfortunately, our young magician is still in the junior stage and faces two obstacles:

  1. Poor verbal communication skills.
  2. Social anxiety.

Feedback has become a barrier that the young magician cannot overcome.


One day, the young magician was worried about the approaching deadline while drinking copious amounts of coffee. Under the influence of caffeine, he had a sudden idea.

Although he was relatively weak, that did not mean he could not summon a mighty dragon.

For example, the Recurrent Neural Network (RNN) from alchemy was a perfect fit for the situation at hand.

Alchemy

Speaking of alchemy, it used to be quite mysterious. Only a few magicians understood its secrets.

Without knowing the truth, the young magician had completed Andrew Ng’s ML course on Coursera and bought a certificate 📄 to prove his innocence. He has long since forgotten everything, but a few vague concepts remain.

Now it is already the year 9012, and alchemy has been simplified to the point of just preparing the materials, no deep thought required.

mkdir feedback
cd feedback
mkdir -p datasets weights outputs

The young magician searched around and found some example phrases, and blindly copied them into datasets/data.txt.
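
The exact phrases are not reproduced here, but judging from the filtering step below, data.txt presumably looks something like this (the lines are made up for illustration):

✓ He writes very clean and readable code.
✗ He never updates the documentation.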

Then he split it into positive and negative feedback.

cd datasets

cat data.txt | grep ✓ | sed 's/✓ //' > 👍.txt
cat data.txt | grep ✗ | sed 's/✗ //' > 👎.txt

After preparing the materials, it was time to set up the alchemy furnace.

The young magician found textgenrnn, which seemed good and easy to understand.

cd ..
pip3 install -I textgenrnn tensorflow

Set up the alchemy steps.

Start refining.

# python3 training.py
from textgenrnn import textgenrnn

# Refine one egg per sentiment.
textgen = textgenrnn()
textgen.train_from_file('datasets/👍.txt', num_epochs=1)
textgen.save('weights/👍.hdf5')

# Use a fresh instance, otherwise 👎 would continue from 👍's weights.
textgen = textgenrnn()
textgen.train_from_file('datasets/👎.txt', num_epochs=1)
textgen.save('weights/👎.hdf5')

Then the CPU of the young magician’s MacBook Pro spun madly, bursting with joyful fireworks 🎆

Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Epoch 1/1
328/559 [================>.............] - ETA: 36s - loss: 1.2788

Load the well-refined 🥚 and take a look.

# python3 testing.py
from textgenrnn import textgenrnn

# Load each refined egg and print a few samples starting with "He".
print('👍')
textgenrnn('weights/👍.hdf5').generate_samples(prefix="He")

print('👎')
textgenrnn('weights/👎.hdf5').generate_samples(prefix="He")

Hmmmmmmmmmmmmm

However, the results were disappointing 💔💔💔


The plan failed.
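
For the record, the intended final step was to chain the two eggs into the feedback spell from the start of this chapter. A sketch, assuming the same textgenrnn API as above (the connective phrase is invented):

# python3 feedback.py (never run, since the eggs were duds)
from textgenrnn import textgenrnn

# One line of praise, one shortcoming, then cast the spell.
praise = textgenrnn('weights/👍.hdf5').generate(1, return_as_list=True, prefix="He")[0]
flaw = textgenrnn('weights/👎.hdf5').generate(1, return_as_list=True, prefix="He")[0]

print(f"{praise} That said, {flaw}")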

The experiment was not continued, but increasing epochs or layers may improve the results.
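
If anyone does continue the experiment, textgenrnn accepts training options along these lines (the values below are illustrative guesses, not tuned settings):

# Hypothetical follow-up: more epochs and a deeper network.
from textgenrnn import textgenrnn

textgen = textgenrnn()
textgen.train_from_file(
    'datasets/👍.txt',
    new_model=True,  # build a fresh model instead of fine-tuning the bundled one
    num_epochs=20,   # far more than the single epoch used above
    rnn_layers=3,    # extra recurrent layers
    rnn_size=128,
)
textgen.save('weights/👍.hdf5')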

Repo: Feedback

However:

It’s definitely because of insufficient data.

The young magician said, firmly and angrily.

😠😠😠
