
Joyce Xu

AI/ML researcher specializing in NLP, reinforcement learning, and distributed computing.

Aspiring bartender, DJ, and unemployed white male podcaster.

Chaotic neutral.

  1. 20: A tribute to my off-brand teenage-ism

    At 19, I learned that I liked all-nighters. I liked the ones at the techno clubs in Berlin, stumbling out into the bright and blissful 8am sun; I liked the ones poring over my 229 homework, in over my head but so unable to tear myself away. They weren’t (aren’t) sustainable, mind you — I’ve always been quite aware of that. But I liked them.

  2. Beyond DQN/A3C: A Survey in Advanced Reinforcement Learning

    One of my favorite things about deep reinforcement learning is that, unlike supervised learning, it really, *really* doesn’t want to work. Throwing a neural net at a computer vision problem might get you 80% of the way there. Throwing a neural net at an RL problem will probably blow something up in front of your face — and it will blow up in a different way each time you try.

  3. Topic Modeling: LSA, PLSA, LDA, & lda2vec

    In natural language understanding (NLU) tasks, there is a hierarchy of lenses through which we can extract meaning — from words to sentences to paragraphs to documents. At the document level, one of the most useful ways to understand text is by analyzing its *topics*. The process of learning, recognizing, and extracting these topics across a collection of documents is called topic modeling.
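    To make the idea concrete, here is a minimal sketch of topic modeling with LDA using scikit-learn. The toy corpus and the choice of two topics are illustrative assumptions, not from any post above:

    ```python
    # Minimal LDA topic-modeling sketch (toy corpus, assumed 2 topics).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "the cat sat on the mat",
        "dogs and cats are friendly pets",
        "stocks rose as markets rallied",
        "investors traded shares on the market",
    ]

    # Bag-of-words counts: LDA models each document as a mixture of topics,
    # and each topic as a distribution over words.
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(docs)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(counts)  # shape: (n_docs, n_topics)

    # Inspect the top words of each learned topic.
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-3:][::-1]]
        print(f"topic {k}: {top}")
    ```

    Each row of `doc_topics` is that document's (approximate) distribution over the learned topics, which is exactly the "document-level lens" described above.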

  4. Deep Learning for Object Detection

    With the rise of autonomous vehicles, smart video surveillance, facial detection and various people counting applications, fast and accurate object detection systems are rising in demand. These systems involve not only recognizing and classifying every object in an image, but *localizing* each one by drawing the appropriate bounding box around it. This makes object detection a significantly harder task than its traditional computer vision predecessor, image classification.
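    The localization half of that task is usually scored with intersection over union (IoU) between a predicted box and the ground truth. A small sketch, with made-up box coordinates in `(x1, y1, x2, y2)` form:

    ```python
    # IoU: the standard measure of how well a predicted bounding box
    # localizes an object. Boxes are (x1, y1, x2, y2); values are toy data.
    def iou(box_a, box_b):
        # Corners of the intersection rectangle.
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union else 0.0

    predicted = (50, 50, 150, 150)
    ground_truth = (100, 100, 200, 200)
    print(iou(predicted, ground_truth))  # 2500 px overlap / 17500 px union
    ```

    Detectors typically count a prediction as correct only when its IoU with a ground-truth box clears some threshold (0.5 is a common choice), which is what makes localization a harder, stricter task than whole-image classification.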

  5. An Intuitive Guide to Deep Learning Architectures

    Over the past few years, much of the progress in deep learning for computer vision can be boiled down to just a handful of neural network architectures. Setting aside all the math, the code, and the implementation details, I wanted to explore one simple question: how and why do these models work?

  6. Functional Programming for Deep Learning

    Before I started my most recent job at ThinkTopic, the concepts of “functional programming” and “machine learning” belonged to two different worlds entirely. One was a programming paradigm surging in popularity as the world turned towards simplicity, composability, and immutability to maintain complex applications at scale; the other was a tool to teach computers to autocomplete doodles and make music. Where was the overlap?

  7. Magic & Machines: Teaching Computers to Write Harry Potter

    I don’t have anything *against* plays, per se. I’m just as excited for *Harry Potter and the Cursed Child* as the next millennial who grew up staring out their window at night, waiting for their letter from Hogwarts, only to get screwed over by what can only be assumed was an incompetent owl delivery service. There’s just something about the magic of the books that seems untouchable — irreplicable. That being said, in honor of the upcoming play, I’m going to try to recreate a bit of that magic.