This section lists all blog posts, regardless of topic.

Parsing Flight Searches Using GPT-3
July 18, 2020

https://danielbigham.blogspot.com/2020/07/parsing-flight-searches-using-gpt-3.html


Experiment: Can OpenAI's GPT-3 Write Wolfram Language Code?
July 16, 2020

Blog post on Wolfram Community


Learning
July 1, 2018

I work in the field of natural language understanding, and in the last few years I've figured out how to apply that very advantageously to learning.

Here's the approach I use:

Every time you come across a new term or concept, you create a new "notebook"/document. That's right, one notebook per concept. The title of the notebook is the name of the concept.

You create a summary for the concept using bullet points. What you're trying to maximize with this set of bullet points is the speed at which, in the future, you can re-read them and achieve a similar brain state to what you had when you originally learned the concept.

You can then, of course, keep extended notes below that where you go into more detail.

Then, crucially, you create something akin to a regex that will allow you to quickly and unambiguously look the concept up in the future. If you just learned what a rectified linear unit is, your pattern might simply be: rlu | (rectified linear unit)

You then have a hotkey on your computer -- I use Ctrl-Q -- that brings up a text box where you can type the name of the concept you want to bring up (ex. "rlu"). When you press ENTER, if there's an exact match it doesn't give you search results; it directly opens the document and makes it instantly viewable/editable.
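For the curious, here's a minimal sketch of what that lookup could look like in Python. It assumes, hypothetically, that notebooks are plain-text files in a single directory and that each one stores its lookup pattern on its first line; the directory path, file layout, and the xdg-open call are all placeholders for whatever your own setup uses.

    import os
    import re
    import subprocess
    import sys

    # Hypothetical layout: one plain-text file per concept, with the
    # lookup pattern stored on the file's first line.
    NOTEBOOK_DIR = os.path.expanduser("~/notebooks")

    def load_patterns(directory):
        """Map each notebook path to its compiled lookup pattern."""
        patterns = {}
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            with open(path) as f:
                first_line = f.readline().strip()  # e.g. "rlu|(rectified linear unit)"
            patterns[path] = re.compile(first_line, re.IGNORECASE)
        return patterns

    def lookup(query):
        """On an exact match, open the notebook directly -- no search results."""
        for path, pattern in load_patterns(NOTEBOOK_DIR).items():
            if pattern.fullmatch(query):
                subprocess.run(["xdg-open", path])  # "open" on macOS
                return
        print(f"no exact match for {query!r}")

    if __name__ == "__main__":
        lookup(" ".join(sys.argv[1:]))  # e.g.: python lookup.py rlu

Binding a script like that to a global hotkey (Ctrl-Q in my case) is the job of your window manager or a macro tool, not the script itself.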

As your concept graph starts to grow, you have links within your notebooks to related concepts as they're referenced.

Each time you read an article that is important to your understanding of a concept, you quickly open up that notebook and add that article, and perhaps one or two bullet points that contain the key things you learned that expanded your sense of that concept.

This same system can be used for more than learning textbook information. If you're a project manager, you can use it to keep tabs on the millions of things you have to juggle; you can have notebooks for people and for lists; and you can have "regexes" for "programs"/scripts, for web pages, for files/directories, and so on.
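Here's a sketch of that generalization in Python -- every pattern, path, and URL below is a hypothetical placeholder. The same exact-match loop works if each pattern maps to an arbitrary action rather than a notebook file:

    import re
    import subprocess
    import webbrowser

    # Hypothetical pattern -> action entries; matching one runs the
    # action instead of opening a notebook.
    ACTIONS = {
        r"hn|hacker ?news": lambda: webbrowser.open("https://news.ycombinator.com"),
        r"proj(ects)?": lambda: subprocess.run(["xdg-open", "/home/me/projects"]),
        r"backup": lambda: subprocess.run(["sh", "/home/me/bin/backup.sh"]),
    }

    def dispatch(query):
        for pattern, action in ACTIONS.items():
            if re.fullmatch(pattern, query, re.IGNORECASE):
                action()
                return True
        return False  # fall through to the notebook lookup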

More general than "regexes" are context-free grammars. In this context, that means the ability to have named "subroutines" for your regexes. For example, if you end up using RLU as a sub-part of a lot of other notebook regexes, you might define $rlu as a short form for (rlu | (rectified linear unit)).
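Here's a sketch of how that expansion could work, again in Python with hypothetical names: $-prefixed references get substituted recursively before the pattern is compiled.

    import re

    # Hypothetical named sub-patterns, referenced in other patterns as $name.
    DEFINITIONS = {
        "rlu": r"(rlu|(rectified linear unit))",
    }

    def expand(pattern, definitions):
        """Substitute each $name with its definition, recursively, so
        sub-patterns can reference other sub-patterns (assumes no cycles)."""
        def replace(match):
            return expand(definitions[match.group(1)], definitions)
        return re.sub(r"\$(\w+)", replace, pattern)

    # A notebook pattern that reuses $rlu as a component:
    pattern = expand(r"$rlu (activation|derivative)", DEFINITIONS)
    assert re.fullmatch(pattern, "rectified linear unit activation", re.IGNORECASE)

The payoff of defining $rlu once is that every notebook pattern referencing it stays in sync if you later refine the definition.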

I'm also a person who loves exploring "idea space", especially as it relates to understanding intelligence, machine learning, etc. I've been using this approach for that over the last few months as well -- any time I have an "aha" moment about a concept, or about how two concepts are related, I quickly flip open the appropriate notebook and add my idea.

When Elon talks about learning, he sometimes describes it as being easy because you just hang a new piece of information on "the tree". What I suspect is that Elon's knowledge tree doesn't crumble as fast as my own. Most times when I learn a new concept or a facet of a concept, I forget it quite quickly. Now that I have a knowledge tree in digital form, I really do have a place to "hang" new bits of knowledge, and they don't get lost. Hopefully this is a more scalable approach to learning.
