takashinagata.com

tech

  • Back-to-Basics Weekend Reading - Dropout: A Simple Way to Prevent Neural Networks from Overfitting

    I realized that I had never read the original dropout paper, even though the technique is used everywhere and I’ve known about it for a long time.

  • Creating a custom environment in OpenAI Gym - Blocking Maze

    This short post shows how to create your own OpenAI Gym environment (a minimal skeleton is sketched after this list).

  • Quick recap of a basic exploration strategy in RL - epsilon-greedy

    Back to basics. I realized that I hadn’t written anything about reinforcement learning (RL) since I started blogging, even though RL has been my research area for a while. I’ve just been loving GANs too much these days :) (A minimal epsilon-greedy sketch follows after this list.)

  • Conditional GANs (cGANs) and their variations

    Back to basics. As part of my “reinventing the wheel” project to understand things well, I took some time to reimplement Conditional Generative Adversarial Nets from scratch. This is a note on that work (a sketch of the conditioning idea appears after this list).

  • Fighting mode collapse 1 - Feature Matching

    In this post, I’ll implement feature matching, a simple technique to mitigate mode collapse in GANs (a sketch of the loss follows after this list).
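
For the custom-environment post, here is a minimal sketch of a gym.Env subclass using the classic (pre-0.26) gym API, where reset() returns only the observation and step() returns (obs, reward, done, info). The grid size, reward scheme, and the BlockingMazeEnv details below are illustrative assumptions, not the exact maze from the post.

    import numpy as np
    import gym
    from gym import spaces

    class BlockingMazeEnv(gym.Env):
        """Toy grid world: the agent starts at the top-left corner and is
        rewarded for reaching the bottom-right goal. Illustrative skeleton only."""

        def __init__(self, size=5):
            super().__init__()
            self.size = size
            self.action_space = spaces.Discrete(4)  # up, down, left, right
            self.observation_space = spaces.MultiDiscrete([size, size])
            self.goal = np.array([size - 1, size - 1])
            self.pos = np.zeros(2, dtype=np.int64)

        def reset(self):
            # Classic (pre-0.26) gym API: reset() returns only the observation.
            self.pos = np.zeros(2, dtype=np.int64)
            return self.pos.copy()

        def step(self, action):
            moves = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}
            self.pos = np.clip(self.pos + np.array(moves[action]), 0, self.size - 1)
            done = bool(np.array_equal(self.pos, self.goal))
            reward = 1.0 if done else 0.0
            return self.pos.copy(), reward, done, {}

    # Usage with the old reset/step signature:
    env = BlockingMazeEnv()
    obs = env.reset()
    obs, reward, done, info = env.step(env.action_space.sample())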
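
For the epsilon-greedy recap, a minimal sketch of the action-selection rule; the function name and the bandit value estimates in the example are just for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def epsilon_greedy(q_values, epsilon=0.1):
        """With probability epsilon take a random action (explore);
        otherwise take the action with the highest estimated value (exploit)."""
        if rng.random() < epsilon:
            return int(rng.integers(len(q_values)))
        return int(np.argmax(q_values))

    # Example: pick an arm of a 4-armed bandit given current value estimates.
    action = epsilon_greedy(np.array([0.2, 0.5, 0.1, 0.4]), epsilon=0.1)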
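
For the cGAN note, a rough PyTorch sketch of the conditioning idea: embed the class label and concatenate it with the noise vector before generation. The layer sizes and the MNIST-like output dimension are assumptions, not the architecture described in the post.

    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        """Core cGAN idea: the generator sees the label embedding alongside the noise."""

        def __init__(self, z_dim=100, n_classes=10, img_dim=28 * 28):
            super().__init__()
            self.label_emb = nn.Embedding(n_classes, n_classes)
            self.net = nn.Sequential(
                nn.Linear(z_dim + n_classes, 256),
                nn.ReLU(),
                nn.Linear(256, img_dim),
                nn.Tanh(),
            )

        def forward(self, z, labels):
            # z: (batch, z_dim) noise; labels: (batch,) integer class labels
            return self.net(torch.cat([z, self.label_emb(labels)], dim=1))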
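
For the feature-matching post, the generator loss can be sketched as below; the feature tensors are assumed to come from the same intermediate layer of a discriminator, and this is a sketch of the general technique (Salimans et al., 2016) rather than necessarily the exact formulation in the post.

    import torch

    def feature_matching_loss(real_features, fake_features):
        """Match the mean activation of an intermediate discriminator layer on
        real vs. generated batches, instead of optimizing the discriminator's
        final output directly."""
        # real_features / fake_features: (batch, feature_dim) activations taken
        # from the same intermediate layer of the discriminator.
        real_mean = real_features.detach().mean(dim=0)  # real statistics act as a fixed target
        fake_mean = fake_features.mean(dim=0)
        return torch.mean((real_mean - fake_mean) ** 2)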



lifelog of a scientist/IT engineer

Copyright © Takashi Nagata 2017-2020 All rights reserved.