control characters

September 27, 2015

A large part of my job is making sure data gets from point A to point B, specifically by way of writing SQL scripts that pull that data from my company’s tables (we provide cloud HRIS software) and format it in such a way that some other company’s servers can parse it correctly and load it into their tables.  That involves a significant amount of learning about and being familiar with a number of different file layouts, some standardized (like the ANSI 834 file, which is used to communicate benefits coverage information to insurance companies) and some proprietary.  One such file I was dealing with recently was an ACH file — these are standard payroll files that send direct deposit information from various institutions like mine to banks.  The problem I was having was that we were sending the file to a particular bank, and they were giving me a cryptic error, saying that every line of the file had 2 extra characters at the end: ^M.  This was the first ACH file I had done, and I knew that those characters weren’t anywhere in the file I was sending, so at first I was thoroughly confused.

After some research, I found the culprit via a number of blog posts mentioning that on UNIX systems, in certain cases, the standard Windows line ending (CR/LF, carriage return/line feed) shows up as ‘^M’.  This is because the Windows line ending is actually a sequence of two control characters (control characters are just ASCII characters that you don’t actually see in WYSIWYG editors; rather than signifying letters and numbers, they do things that control the flow of the document).  The first is the CR or carriage return character, and the second is the LF or line feed character.  These are also known as CHAR(13) and CHAR(10) respectively, since those are their ASCII values.  (UNIX systems end lines with just the line feed, which is what the standard ‘\n’ escape produces.)  In almost all situations this is what you want to end a line in a file, but in ACH files (and certain other contexts), the carriage return character causes a problem.  I confirmed the issue by opening my file in hex format (I use Sublime Text 2, but any number of text editors can show a file in hex), and saw that every line ended with the two bytes 0d 0a, or 0x0d 0x0a as hex values are often written.
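You can see the difference with a quick hex dump.  A minimal sketch using printf and od (the filenames here are just examples):

```shell
# Write the same text with a Windows (CR/LF) ending and a UNIX (LF) ending,
# then dump the raw bytes of each file.
printf 'HELLO\r\n' > windows.txt
printf 'HELLO\n' > unix.txt
od -An -tx1 windows.txt   # 48 45 4c 4c 4f 0d 0a  (CR then LF)
od -An -tx1 unix.txt      # 48 45 4c 4c 4f 0a     (LF only)
```

On a UNIX system, tools that expect bare-LF endings display that stray 0x0d byte as ^M.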

The next step was to actually formulate a resolution.  At work I use an internal development framework for generating text files that automatically creates a lot of the SQL I use so that I don’t have to type it over and over again, so I had never actually specified the line ending in a BCP statement (the bulk copy statement that takes a SQL result set and outputs it to a file, such as a CSV file that can be opened in Excel).  But there’s a first time for everything.  I specified that the line endings should be just the line feed character by using ‘-r 0x0a’ in the BCP command.  This tells bcp to use just the line feed character as the row terminator, as opposed to the Windows-standard CR/LF.  And presto — a perfect file that was accepted by the bank.
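For the curious, here’s roughly what that looks like.  The bcp line is a sketch with invented server and table names (shown commented out, since it needs a live SQL Server), and the tr line shows an equivalent after-the-fact fix for a file that already has CR/LF endings:

```shell
# Sketch of a bcp export that ends rows with a bare LF (names are made up):
#   -c = character mode, -r 0x0a = row terminator, -T = Windows-auth connection
# bcp "SELECT * FROM dbo.ach_batch" queryout ach.txt -c -r 0x0a -S MYSERVER -T

# Stripping the CRs out of an existing CR/LF file gives the same result:
printf 'ROW1\r\nROW2\r\n' > crlf.txt
tr -d '\r' < crlf.txt > lf.txt
od -An -tx1 lf.txt   # 52 4f 57 31 0a 52 4f 57 32 0a  (no 0x0d bytes left)
```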

I disagree with Ted Cruz on a lot of subjects.  We have different opinions on pretty much everything people have opinions on.  But that’s fine.  I don’t like his opinions, but I do like living in a country where we can have opinions as different as ours are and neither of us is in jail because of them.  Our political differences really aren’t relevant for what I’m going to talk about here; this isn’t a blog about politics, it’s a blog about science and technology.

The reason we should be angry that Ted Cruz is chairing the Science and Space subcommittee is that Ted Cruz does not believe in science.  Maybe he says he does; I don’t know.  But I do know that if you don’t believe that global warming is happening, you don’t believe in science.  As I mentioned in my previous post, it is an incontrovertible fact that the earth is getting warmer.  To me it’s clear that human industry is a major (though not singular) cause of this warming, but I acknowledge that this is interpretation rather than fact.

However, Ted Cruz has gone on record as saying that the earth is in fact not warming, supposedly based on data from the past 15 years.  This article provides some good news: the earth has warmed less than we thought it would in the past 20 years.  However, it also shows that the global temperature is rising (by 0.14 +/- 0.06 degrees C per decade).  In addition, it doesn’t really matter what the last 15 years show.  If we’re talking about long-term global temperature patterns, 15 years is nothing.  Even in the past 150 years we’re only talking about a couple of degrees.

Having Ted Cruz chair a subcommittee on science is like Catholics electing an atheist Pope: you’re being led by someone who completely opposes your entire mission.  I hate that Cruz is going to slash funding for NASA, but that’s just a difference in opinion.  I believe in, and will argue vehemently for, the importance of space exploration, but I can’t say that Cruz doesn’t believe in space exploration.  More likely he believes that government funds should be spent on other things.  I might not like it, but that doesn’t make him a bad leader.  What does make him a bad leader is that he does not believe climate change is happening.  It is a logical impossibility to believe in the scientific method and yet reject the reality of global warming, ergo Ted Cruz does not believe in science.

Not everyone values science.  I don’t like those people, but according to my own personal beliefs, they have as much value as I do.  I’m not saying Ted Cruz should be thrown in jail, and I’m not saying (at least based on the topic under discussion) that he should be thrown out of Congress.  What I am saying is that it is patently ridiculous to have someone who rejects science overseeing scientific establishments.  I would think that this is one thing everyone could agree on: if we believe in the value of science, and if we believe that our country has a scientific mission to explore new areas (both geographically and ideologically), then we should have someone in charge who acknowledges the value of what he is overseeing.  Ted Cruz is not such a person, and Ted Cruz is the wrong person for this specific job.

(As always, I welcome dissenting opinions that are data-driven and don’t advertise a specific agenda.  If I come to your home and insult you to your face you have every right to throw me out, and if you submit a comment that is insulting or intellectually vacuous I have every right to reject it.)

Global warming

December 6, 2014

That “global warming” is a controversy is a travesty.  There is much to debate about global warming: to what degree it is caused by humans, what its effects will be, what we should do about it, etc.  But global warming itself should not be under debate, because it’s simply a fact.  That the temperature of the earth is rising is as well-established as the distance from the earth to the sun: it is a fact based on math and careful measurements.  Right now average temperatures are more than a degree (Celsius) above what they were during the Little Ice Age 400 years ago.  One degree may not sound like a lot, but until the 20th century, global temperatures had fluctuated (up and down) less than a degree over the preceding 2000 years.

Those are the facts.  The question of whether humans are the primary cause of global warming is less clear, because it relies not on measurements but on interpretation of that data.  It is almost certainly the case that humans are not the only cause of global warming.  The earth has been rebounding from a dip in temperature in the 1600s, and on a larger scale from the Pleistocene ice age that was still in full force when humans were migrating to the Americas.  However, it is also certainly the case that humans are contributing to global warming.  We have raised the levels of atmospheric carbon dioxide to what are record levels for the last 400,000 years, and carbon dioxide is well-documented to be a “greenhouse” gas that raises the temperature of the earth when present in the quantities it exists in today.  Still, for a long time I wasn’t convinced that humans were the primary cause of global warming.  It was this graph that changed my mind.  We can see a gradual increase in temperature starting in the 1600s, and then around 1900 a steep spike in the rate of change.  I have yet to see a convincing argument against the most obvious interpretation of the data: that the industrial revolution and the sustained carbon emissions that followed have accelerated the warming of the planet.

The final question is: should we be worried about it?  Even if we accept that global warming is happening and that humans are the primary cause, does that mean we need to seriously change our way of life?  In my opinion, yes.  That’s not to say that high carbon dioxide levels would somehow ruin the planet.  A hundred million years ago carbon dioxide levels were more than 20 times what they are now.  The earth will get along fine with pretty much whatever we can throw at it.  What we should be worried about is the widespread extinction of plant and animal species — including us.  Life has trouble adapting to large, sudden changes, as evidenced by the mass extinction after the Chicxulub impact, which killed off many species, including (eventually) all species of dinosaur.  Life will continue regardless of what we do to the environment, but we may not weather it if the food crops we depend on and the plant and animal species we get our medical breakthroughs from disappear.  In addition, there’s the question of sea level change.  Sea levels have risen in the past 100 years, and will continue to do so at a faster rate over the next 100 years.  This isn’t a problem in and of itself, for the same reason that the continent layout 500 million years ago wasn’t somehow “better” than it is now.  The problem is that we’ve arranged our cities and our lives in a certain way, and we won’t be able to sustain that arrangement given the current acceleration of global warming and sea level change.  Many coastal cities will be underwater in a few generations if we continue at the current pace.  That certainly doesn’t spell the end of our species, but it would be a monstrous public works project requiring tens of millions of people to relocate.  I think the easier option is to limit carbon emissions by switching away from fossil fuels (which we only have enough of for another century even by the most blindingly naive estimates).
I’m not saying “Listen up America, YOU MUST DO THIS.”  I’m just saying reducing carbon emissions, while enormously expensive and difficult, is a much easier option than dealing with more unpredictable and violent weather, rapidly rising sea levels, and the behemoth infrastructure changes that would necessarily have to be made because of these problems.


November 29, 2014

This will be the second in a list of (at least) 3 somewhat controversial topics that I wanted to address on this blog.  I don’t expect everyone to agree on these issues, but I like to try to state in objective terms what the facts are so that when we argue we can do so from the same starting place.  The issue of Genetically Modified Organisms (GMOs) is being hotly debated right now, and I suspect that for many people it comes down to the same old fear of science that we have dealt with in other arenas.  People fear doing something a new way because it’s unproven and unproven things can be dangerous.  For me, one of the issues that isn’t discussed enough is the impact of GMOs on the world food supply.  In terms of quantity and quality of food, GMOs can offer great gains, increasing food security in many countries where farming is difficult with traditional crops and traditional methods.  While it may be admirable for anti-GMO advocates to question the safety of GMOs and make sure nothing untoward can come of them before more widespread implementation, we should also remember that restrictions on GMOs are killing people right this very second.

So what is a GMO, really?  As far as I’m concerned, everything is technically a genetically modified organism.  The first organism was a microscopic single-celled creature that probably survived on photosynthesis, so everything alive today has undergone a lot of genetic modification.  In addition, virtually all of the food we eat has heavily modified genes.  Don’t want genetically modified corn?  Have fun eating teosinte, the wild grass that corn was domesticated from thousands of years ago.  There is no such thing as “natural” corn, and the corn we eat today (or, in fact, the corn people ate 500 years ago) bears almost no resemblance to the grass it is derived from.  This is not because of modern science but because of traditional selective breeding techniques.  Now, of course, when most people talk about GMOs they are excluding selective breeding and talking about direct genetic modification.  Personally I think this is an artificial distinction, but I acknowledge that there is a specific technical and biological difference between the two methods.  The real question is whether there is a reason to think that horizontal rather than vertical gene transfer can result in species that are more dangerous to us.

Usually when we talk about transferring genes in nature, we’re talking about vertical gene transfer: two organisms mate sexually and combine their genes into a new offspring with traits stemming from both parents.  However, not all organisms mate sexually (in fact, many do not, and there is significant debate about why sexual reproduction exists at all), and we can also end up with gene modification due to random mutations.  In addition, there is also rampant horizontal gene transfer in nature, such as in many species of bacteria, where individual bacteria swap genes with each other not in a reproductive act but just in the course of normal interaction.  Thus horizontal gene transfer is very natural and occurs without human intervention.  For that reason I think it’s better to think of genetic modification as similar to in vitro fertilization rather than, say, cloning: we’re not doing anything that doesn’t already happen in nature.  It’s not The Truman Show; it’s a blind date.  We just put the genes that we want to end up in a new strain together and hope they like each other.

We can approach the issue of whether or not GMOs are dangerous from two directions: a priori (logically, by thinking about possible reasons why the gene modification process could create strains that would poison us) or a posteriori (by looking at evidence of danger that already exists, since people have been eating GMOs now for several decades).  A priori, there is no specific reason to suspect horizontal gene transfer would create poisons in food.  As mentioned above, horizontal gene transfer already happens naturally between bacteria.  A posteriori, there has been no clear evidence that GMOs are somehow biologically different from other food sources.  Perhaps their origin is more artificial, but the resulting plant is just as natural as any other plant.  Saying GMO corn is somehow corrupt or dangerous is like saying humans are really bad monkeys.  The main position of the anti-GMO camp is that we don’t know that they are safe.  I say: fair enough.  We don’t really have enough data yet, and we may never, to prove beyond a shadow of a doubt that GMOs are always safe to consume.  So the real question is whether it is ethical to introduce GMOs into the food supply without this data.

If we had plenty of food, then this would basically be an academic argument.  “We don’t need this technology, so let’s do a lot more testing before we even consider introducing it to the world.”  Well, unfortunately that’s not the case.  Even in the United States, which is one of the richest countries in the world, up to 20% of children are food-insecure (depending on your definition of food-insecure).  In many, many other countries, this number is greater than 50% and many children (and adults) die of starvation or malnutrition every year.  Since we have decades of data that suggest (though certainly do not prove) that GMOs are safe, I think this is a compelling argument for allowing companies to use genetically modified food, at least where it is needed most.  I think the real danger is not in people eating this food, but in companies using these methods to make more profit without passing the savings along to those who really need it.  And that raises the question of whether GMOs should be marked as such.  Personally, I think the stand-up thing to do in the current food climate is for companies to label GMOs, so that people can make their own decisions.  However, I don’t know if I would be in favor of a law to require such labeling, since this is one more hurdle standing between starving people and food production methods that could save their lives.


September 13, 2014

Today I want to talk about fluoride — but probably not for the reason you think.  (OK, maybe a little bit for the reason you think.)  Ridiculously (in my opinion), the fluoridating of public water has become a hot-button topic in the past few decades, with a vocal and sometimes sizable contingent arguing that adding fluoride to public water is harmful to your health and shouldn’t be done.  Why?  There are any number of pseudo-scientific reasons, most of which are specifically debunked by the AAP.  I won’t address these issues because many people have commented on them, and because the medical establishment keeps so many relatively safe drugs out of the United States that if the FDA says something is safe, I believe them.  Case in point: the therapeutic dose of ibuprofen for a guy my size is around 1200mg, yet this is six times the recommended dose on the bottle, and equal to the maximum recommended daily dose.

What I really want to talk about is chemical terminology, because that right there gives us a clue that we don’t need to be that worried about fluoride.  Generally speaking, lifeforms are neophobic (they fear new things), because until something is proven safe, it’s safer to avoid it.  This has become part of our genome, and virtually all species, including humans, display neophobia in many areas of interaction with their environment.  Because many people don’t know or care much about chemistry, chemical terms mean little to them, and unfamiliar terms often carry a negative association.  Because of this, people are often afraid of “chemicals”, even though all life is chemical.  One of the more outrageous claims I’ve come across is that fluoridated water is essentially a form of mind control used by the government to keep the populace docile.  This is impossible, and I’ll tell you why: it absolutely has to do with the fact that your toothpaste says “sodium fluoride” and not “sodium pentothal”.

Sodium thiopental, sold under the trademark Pentothal (hence “sodium pentothal”), is often known as a “truth serum”.  In fact it’s an anesthetic which, using the same chemical mechanism as alcohol, makes it more difficult for you to lie because it suppresses higher brain function.  Truth serums work essentially the same way as a case of Natty Light: they make it hard to think straight, and if you consume too much, they make it hard to breathe straight.  So why am I bringing up this other “scary” chemical?  Because its name doesn’t really break down into anything simple.  I assume the “pent” has something to do with five, either as a count of atoms or an attachment site on a hydrocarbon chain, the “thio” tells us there’s sulfur in it, and the “al” is the traditional suffix for barbital and its fellow barbiturates (not, as you might guess, a reference to alcohol).  The full IUPAC name of this drug is [5-ethyl-4,6-dioxo-5-(pentan-2-yl)-1,4,5,6-tetrahydropyrimidin-2-yl]sulfanide sodium according to Wikipedia, so you know it’s serious.  That’s a bunch of terms all describing different chemical groups, with the numbers describing where they’re attached to each other.  These are the types of molecules that can have subtle and mind-altering effects.  Often it’s also complex organic molecules that cause cancer, though there are simple things that cause cancer too, like radioactive elements.

Now I’ll give you the full chemical formula for fluoride.  Here it is: F.  That’s it.  It’s one atom.  As sodium fluoride it is exactly two atoms.  Fluoride is not a complex organic molecule, and as a result we don’t expect it to have complex and subtle effects on the human body.  In fact, it doesn’t.  Fluoride in high concentrations is really, really dangerous.  We know we’re not being poisoned by fluoride in the water because we’re still alive.  Liquid or gaseous fluorine will eat your face off in a second.  That’s why fluoride levels in public water are 0.7 mg/L.  At these levels you’d have to drink over 7000L of water (a liter is close in size to a quart, so we’re talking thousands of gallons) to hurt yourself.  If you drink just 8 glasses of water a day that’s 10 years’ worth of water — and you’d have to drink it within a short period of time to get lethal toxicity from fluoride.  That’s not to say that simple chemicals can’t have serious effects over long time periods.  Products containing bromide, another halogen ion, have been removed from OTC drugs since the 1970s for that very reason.  So maybe what I’m really saying is that it’s not that fluoride can’t have negative effects.  Maybe what I’m saying is that we know fluoride is toxic in the wrong doses, and we know fluoride is harmless in the right doses, and we have years and years of research into whether there are negative effects to long-term ingestion of low doses of fluoride, and all that research says it’s fine.  Does that mean it’s literally impossible that fluoride has negative effects that the mainstream scientific establishment hasn’t discovered?  No.  But it seems very, very unlikely to me, and the downsides of not having fluoridated water are well-documented and serious (dental caries, or cavities, are one of the biggest money pits in personal and societal medical care).
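That 7000L figure is easy to sanity-check.  A quick sketch (the ~5 g dangerous dose is the rough figure implied by the numbers above, not a medical reference):

```shell
# Liters of 0.7 mg/L fluoridated water needed to reach ~5 g (5000 mg) of fluoride.
awk 'BEGIN { dose_mg = 5000; conc_mg_per_L = 0.7; printf "%.0f L\n", dose_mg / conc_mg_per_L }'
# prints: 7143 L
```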

NOTE ABOUT COMMENTS: I always welcome opposing viewpoints, but I’m not going to publish comments that are merely antagonistic, or that make claims without supporting peer-reviewed evidence or at least a solid logical argument.  This is my personal blog, and the comments I choose to publish reflect on me as a writer and as a person.

A couple weeks ago Kurt posted a comment on my System76 review, asking if I thought the System76 hardware was better than the hardware on a comparably priced Dell laptop, i.e., whether it was worth getting System76’s Ubuntu laptop rather than a Windows laptop dual-booting with Ubuntu.  This is a worthwhile question, since I mentioned in my earlier review that my primary reason for going with System76 was ethical: I value open-source software, and I want to give my money to a company that values that as well.  However, in a word, I think the answer to Kurt’s question is “yes”.  I’ll summarize some reasons why.

I owned 2 Dell Inspirons (sequentially) before buying my current laptop, the first in the $1000-$2000 range, the second in the <$500 range.  Both had the same issue, which ultimately was one major reason I swore off Dell: the screen hinge breaks under reasonable use after a couple years.  For the first (more expensive) Inspiron, the problem started happening just after the 3-year mark (right when the warranty ended), and progressed to making the hinge non-functional less than a year later.  I’m not willing (or able) to spend more than ~$300 per year on a laptop, so if I spend $1500 on a laptop, I expect to get some wear out of it.  That was simply not the case (no pun intended) with my Dells.  The second (cheap) one I mostly abandoned before the hinge failed completely, due to an LCD problem (not Dell’s fault — liquid damage), but even in the time I used it (less than 2 years) cracks had formed on the case near the hinges, and the hinge itself was starting to wear.

What I would love to do is some accurate benchmark testing, but since the systems I have access to vary widely, I won’t be able to give any really useful numbers.  As I mentioned in the original review, my Gazelle Pro boots to login in about 20 seconds, with another 5 to actually log in.  My old Dell, which dual-boots Ubuntu, boots to login in about 35 seconds, with about 10 to log in, for a total of about 45 seconds from power up to desktop.  It has an older and more streamlined Ubuntu installation, and the same amount of memory, but it does have a Celeron processor (please hold your groans until the end of the presentation), so I don’t think it’s fair to compare these two systems.  Kurt mentioned the Dell Latitude as a possible comparable system, but in my experience it’s the lesser-quality Inspiron that’s in a similar price range (my work laptop is a Latitude and runs around $2500).

I checked out the Dell web site and found an Inspiron 15 for $750 that roughly compares to the System76 Gazelle Pro.  The Gazelle, similarly configured, goes for $843.  At that price the Gazelle has a slightly better video card (both are Intel, the Gazelle a 4600 vs. the Inspiron’s 4400) and a better HD display.  The processors and WiFi options are comparable, and I selected the same memory and hard drive for each.  For me there are two big advantages to the System76: it has a CD/DVD drive, and it has a larger-capacity battery (a 62.16 Wh 6-cell vs. the Inspiron’s 43 Wh 3-cell).  Looking at the $750 Inspiron, I would definitely say that Dell has stepped up its game since I last bought one of their systems (in 2010).  The new Inspiron looks sleek, and they claim improved hinges in the description, which tells me they at least know about the hinge issue (though I still wonder if it’s fully resolved).  However, for me the lack of an optical drive is almost a deal-breaker.  I will definitely always choose a laptop with a built-in optical drive as long as it’s affordable.  The battery life is less important to me, but it is really great to have a laptop that I can unplug for hours at a time.  I’m 9 months into my Gazelle Pro, and the battery still lasts up to 4 hours on light use, and 2-3 if I’m doing processor- or graphics-intensive stuff.  My Dell battery was only fully functional for the first year.  During year two I could only leave it unplugged for maybe an hour (if that), and in year three the battery served basically as a backup power supply so that I could unplug the adapter and then move to another room.  Now in year four the system doesn’t even recognize the battery, and dies the instant the plug is removed.

In my original review I mentioned four specific things I wasn’t happy with: the weight of the keys, the sensitivity of the keys, side scrolling functionality, and tap-to-click functionality in the touchpad.  Obviously the keys and the tap-to-click functionality haven’t changed, though I have gotten a bit more used to them.  Last month I switched to two-finger natural scrolling after getting accustomed to it on my work computer, and it’s definitely a lot more natural on the Gazelle than one-finger side-scrolling was.  Lastly I wanted to mention a few specific issues a potential commenter on my original post brought up about System76’s Galago UltraPro.  I rejected the comment because I felt that it was vitriolic bordering on libelous, but I don’t like to reject people’s ideas (and I’ve never used the Galago), so I wanted to bring up his issues here.

According to Melvin, the screen, WiFi, battery, and speakers on the Galago are below standard, to the point that he would not recommend the system (and in fact stated that he would not return to System76 as a result of his experience).  I’ll mention my experience with these components on my Gazelle.  Ultrabooks in my experience have far more issues than standard-size laptops, so I don’t mean any of my “rebuttals” to invalidate his points (and I do think a lot of research is necessary before going with any ultrabook, as they are usually impossible to upgrade and much more difficult to troubleshoot), but I do want to point out, where applicable, my positive experiences with System76 in comparison to his negative ones.  Melvin mentions that the screen on the Galago is subpar; obviously I can’t comment on the Galago, but on the Gazelle the screen is absolutely beautiful.  The only issue I’ve had is that after a system update I had to reset the color profile.  I also haven’t had any trouble with the WiFi connection on the Gazelle, though in general I have found that wireless adapters can be a bit more finicky on Ubuntu than on Windows.  I see that the Galago has a slightly smaller battery than the Gazelle (53.28 Wh vs. the Gazelle’s 62.16 Wh), but they’re both 6-cell, so I’m not sure why the battery would be so bad on the Galago.  As I mentioned, I still get a good 4 hours of life out of my Gazelle after 9 months of constant use.  The speakers are definitely an issue.  The System76 speakers are very, very quiet, to the point that it’s sometimes difficult to make out audio while watching, e.g., a YouTube video.  This hasn’t really been an issue for me because if I’m ever using my laptop for music or video I use my Chromecast, external speakers, or headphones, but I think the frustration with the speakers is a valid issue that I would like to see System76 address.

In conclusion, I would say that a System76 laptop would probably edge out a comparable Dell in benchmark tests, though as I mentioned I don’t have comparable systems at my disposal, so this is just an anecdotal feeling rather than anything based on meaningful data.  It’s hard for me to be objective about this issue anyway, because I am ideologically opposed to the way Dell and Microsoft do business, and so I take any chance I get to spend money on a product that supports open-source software.  Thus, take my opinion with a grain of salt if you like Windows and Dell.  I haven’t bought a Dell in 4 years, but it looks like their current systems are comparable to the System76 (in that the comparable Inspiron I looked at was not quite as good but also a bit cheaper), and I know from experience that installing Ubuntu on a Dell laptop is definitely a viable way to go.  I would say it all comes down to whether or not you need Windows.  If you do, a Dell is probably a better option than going with the more expensive System76, which you’ll probably have to reformat to install Windows on (remember kids, Windows 7 MUST have the first hard drive partition!).  If you don’t want Windows, I would always recommend System76.


June 8, 2014

As part of my ongoing series of (hopefully informative) posts about basic functions in Ubuntu, I thought I’d talk a little bit about mount. Hard drives and other peripheral devices (well, to be precise, file systems) are connected to a Linux system via the mount command. In Linux this is called “mounting” a drive or partition. If you only have one hard drive, this will already be mounted when you boot up. If you have additional hard drives they’ll either have to be mounted manually after boot or else added to your fstab configuration file (as much as I want to parse that as “F stab”, it stands for “file system table”). Partitions present in fstab will be automatically mounted during boot.

The simple command mount doesn’t actually mount anything, but gives you a list of things that are already mounted. A number of system items will probably show up, but the main things to look for are entries like /dev/sda1, which are the kind of partitions that you’ll usually be dealing with. sda is the first hard drive on your system, sdb is the second, sdc is the third, and so on. The number following indicates the partition number. If a drive has only one partition (as many hard drives do), it’ll just be, e.g., sdb1. If it has multiple partitions they’ll be sdb1, sdb2, sdb3, etc.

The typical mount command I’ve used is mount -t type dev dir. The -t flag tells it you’re specifying the file system type. For newer Linux systems this will probably be ext4 if it’s a typical data partition (rather than a boot or swap partition). For a Windows partition it’ll probably be ntfs. For a USB drive it will probably be vfat, fat32 or possibly fat16. Thus for a standard Linux partition you’d start with mount -t ext4. However, this doesn’t say what to mount or where to mount it. “dev” in the command stands for “device”; you’ll replace that with the partition you want to mount. So if we wanted to mount the first partition on the second hard drive, we would add to the above to create mount -t ext4 /dev/sdb1. Now we have to add the directory (“dir” in the original schema) where the contents of the partition will appear. This can be pretty much anything, but by default things are usually mounted in the “media” folder. So, for instance, if this is a hard drive where you keep all your music (I still love physical CDs, but I have to have everything ripped to my computer as well), you could choose /media/mp3s. So the complete command would be mount -t ext4 /dev/sdb1 /media/mp3s. To unmount something you only have to specify the device, and the command is umount (not a typo, it’s umount rather than unmount; a very easy mistake to make).
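Putting it all together, here’s a sketch of the full sequence (the device name and mount point are examples; check your actual devices with lsblk). One detail worth noting: the mount point directory has to exist before you can mount onto it.

```shell
# Example device (/dev/sdb1) and mount point (/media/mp3s) -- substitute your own.
sudo mkdir -p /media/mp3s                  # the mount point must exist first
sudo mount -t ext4 /dev/sdb1 /media/mp3s   # attach the partition
mount | grep sdb1                          # verify it shows up in the mounted list
sudo umount /dev/sdb1                      # detach it (umount, not unmount)
```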

Lastly I’ll mention the fstab file again, which automatically mounts drives when you boot the system. I actually don’t use my fstab file much because I like the system to boot with just the main drive; if any problems crop up it’s easier to isolate them. There are GUI systems for editing your fstab file, but a lot of them are lacking, and honestly this is a case where I think it’s actually safer to edit the actual system file than use a front end (note that for many system files the exact opposite is true). It’s hard to do something really catastrophic with your fstab file because all it does is mount drives automatically — in most cases the worst outcome is that something mounts wrong and you have to unmount it, or it fails to mount automatically. However, it’s good to make a backup of your fstab file anyway in case something goes wrong. The first column in the file is the device (e.g., /dev/sdb1), the second is the mount point (the directory you want to mount it to), and the third is the file system type. The fourth column allows you to list options for the device. Typically if you want it to mount automatically and you don’t have any special requirements this column should say “defaults”. The last two columns have to do with system backups and error checking, and in most cases should both be set to 0. Unless it’s your root partition they should NOT be set to 1.
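As an illustration of those columns, a minimal fstab entry for the mp3 drive example above (the device and mount point are hypothetical):

```
# <device>   <mount point>  <type>  <options>  <dump>  <pass>
/dev/sdb1    /media/mp3s    ext4    defaults   0       0
```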