Safer Email

Today let’s think about how to be safer using the oldest internet application still in common use: email. Email predates the Web by about twenty years. So when young people accuse it of being “for old folks” (meaning, people like me) I have to admit they may have a point. But email is still far and away the best mode of communication for business correspondence, and for the exchange of personal messages longer than 160 characters.

And long before the web, but shortly after the creation of email itself, spam was born. In addition to being annoying, spam can create some information safety issues. So there are two main things I want you to remember when you see spam in your inbox: use the spam you get to better train your filter, and never click on any links or open any file attachments.

All modern webmail clients have built-in spam filtering. Personally, I use Gmail to read my mail, even mail from other domains (such as safer-computing.com). The benefit of using an established webmail system as your mail reader is that the provider’s spam filters have been exposed to billions and billions of emails, and so they are very well-tuned for a low rate of both false positives (when the filter puts a valid email in the spam folder) and false negatives (when it delivers actual spam to your inbox). The less of either, the happier you are with the result.

You train spam filters by identifying both false positives and false negatives for them. For example, in Gmail, there is a “Report Spam” menu option or button in every non-spam folder and a “Not Spam” button in the spam folder. You should make use of these whenever possible. That means occasionally visiting the spam folder to look for those false positives. The more you do this, the less necessary it becomes, because the filters adjust their criteria to the kind of email you get and even to your subjective tastes about what is and is not spam.

One notable subset of spam deserves special mention: the scams. Disney vacations, prizes in lotteries (that you don’t remember entering), gift cards and many more unbelievable windfalls show up in your mailbox by the hundreds each month. As you no doubt know, these are nothing but scams to get your personal information or to extract redemption fees for claiming these imaginary prizes. Mark them all as spam.

And of course, there really is no dead Nigerian prince whose family lawyer wants to pay you 20% of $1.6 billion to help them expatriate the money. The only thing that you will get for responding to these is an escalating series of demands for fees to cover the assorted (made-up) mechanics of moving the (imaginary) money and finally (never) paying you. Sending these emails is a crime, and you can report it to the FBI at https://www.ic3.gov/complaint/

Phinally, phishing. Phishing is the sending of emails carefully crafted to look like they come from a legitimate organization, such as a bank, a government agency like Social Security or the IRS, or an employer. The typical phishing email will have a message designed to create some sense of urgency, and links crafted to resemble the links to the legitimate website being spoofed. For example, the email may alert you to a credit card fraud attempt, and the links embedded go to chasebank.com (for example). The problem here is, Chase Bank’s website is really at chase.com. When you go to chasebank.com, which was created by the scammers, you will indeed find the familiar login screen and so on. When you log in through this screen, you will land on the familiar opening screen of chase.com. However, because you logged in through the scammers’ fake page, they’ve snagged a copy of your ID and password in the process. It is easy to do that and then pass your valid credentials along to the real site, so your experience is the same as usual. The fake login page looks very real because the scammers can easily go to the public pages of the real chase.com and grab copies of all the graphics, fonts, content, style sheets and even a fair amount of the programming code needed to make certain pages look and work the way the real ones do. The result is a presentation that even professionals will have a hard time distinguishing from the real thing. It sounds like a lot of work but it pays very well. One single phishing attack in April netted $495K from a Michigan investment firm. And any given phishing email can go to millions of users at a time.
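If you're curious what actually separates the real link from the fake one, it's the hostname, not how the link looks on screen. Here is a minimal sketch in Python, purely illustrative; “chase.com” just stands in for whichever domain you already know to be legitimate:

```python
# A minimal sketch: the hostname in the link, not its on-screen appearance,
# is what matters.  "chase.com" is a stand-in for a known-good domain.
from urllib.parse import urlparse

KNOWN_GOOD = "chase.com"

def looks_legitimate(link: str) -> bool:
    """True only if the link's hostname is the known-good domain or a subdomain of it."""
    host = (urlparse(link).hostname or "").lower()
    return host == KNOWN_GOOD or host.endswith("." + KNOWN_GOOD)

print(looks_legitimate("https://www.chase.com/fraud-alert"))             # True
print(looks_legitimate("https://chasebank.com/login"))                   # False
print(looks_legitimate("https://chase.com.secure-login.example/login"))  # False
```

That last example is a favorite scammer trick: the real domain appears at the front of the hostname, but what counts is what comes at the end.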

The lesson here is, never click on links in emails, unless the senders are personally known to you, or for things like password resets that you know you initiated within the past few minutes. Certainly, for financial and government services, you should navigate to their websites by way of known links you have previously saved as bookmarks or stored in secure password-manager records. If you use a search engine to make initial contact with an agency or company, make sure that you skip past the sponsored links and click only on the most relevant non-sponsored one. Phishing emails, like all scams, should be reported to the FBI at https://www.ic3.gov/complaint/.

Whether it’s spam or phishing, when an email arrives that “wants” you to click on its links, leave it wanting. Especially, never click on “unsubscribe” links in spam email. Doing that simply confirms for the spammers not only that your email address is valid, but that you actually read their email. They will reward this by showering you with much love. And spam. Well, mostly spam.

 

A Modeling Job for You

Motherboard, a part of Vice magazine, has published a very good Guide to Not Getting Hacked.  It’s also available as a PDF.

One of my favorite sections draws from the EFF Threat Modeling page.  “Threat modeling” may sound like something a management consultant would explain to you with 19 PowerPoint slides for only $45,000.  But it really just consists of considering these five questions:

  1. What do I want to protect?
  2. Who do I want to protect it from?
  3. How bad are the consequences if I fail?
  4. How likely is it that I will need to protect it?
  5. How much trouble am I willing to go through to try to prevent potential consequences?

Ultimately the goal of information security is not to protect the information assets absolutely.  Protecting anything absolutely is not even theoretically possible.  What we’re trying to do here is, make the information assets more trouble to attack successfully than they’re worth.  If stealing a new sprocket design from the engineers at Spacely Sprockets is worth $4 million, then we have to make it cost an expected $4.5 million or more to get.  That way, even success is failure for the attacker.

But if preserving that design is worth $4 million to us, we’d be idiots to spend $4.5 million defending it.  We could post it on Facebook and save ourselves $500,000.
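If you like seeing the arithmetic written out, here is the Spacely Sprockets reasoning as a tiny sketch. The $500,000 defense budget is my own hypothetical number, chosen to make the point:

```python
# The Spacely Sprockets arithmetic from the example above, written out.
value_to_attacker = 4_000_000   # what stealing the design is worth to them
value_to_us       = 4_000_000   # what keeping it secret is worth to us
cost_to_attack    = 4_500_000   # what our defenses force a successful attack to cost
cost_to_defend    =   500_000   # hypothetical: what we spend to impose that cost

attacker_deterred = cost_to_attack > value_to_attacker   # even success is failure for them
defense_sensible  = cost_to_defend < value_to_us         # and we didn't overspend

print(attacker_deterred, defense_sensible)   # True True
```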

Threat modeling is really just taking a breath, refusing to panic, and applying all-too-UNcommon sense.

What’s Missing?

What’s missing from this pretty-good article?  Give it a read, but the TL;DR is that a NY Times cyber-security writer tells us what she does to make herself safer online.

It includes everything I do, and a few things I don’t.  But there’s one crucial item missing.

Ad-Blocking.

It’s not hard to figure out why ad-blocking is left out of a NY Times online article.  But I will say that until the publications who pay for it exert some pressure on the ad networks to clean up their act, I will continue to block ads 100%.

If they refuse to let me visit, I will gladly go elsewhere.

I predict that the publications will never do this, because the cost of ad-borne malware is a complete externality to them.  They never feel the tiniest pinch.  They leave that to us.

 

Internet of Crap

Welcome to the wonderful world of the Internet of Things. You’ve probably seen this term in the news a bit lately. Perhaps you read about it in connection with a massive botnet called Mirai, or its even more potent descendant, IoT_reaper.

The term Internet of Things (IoT) refers to items – other than computers, tablets or mobile phones – that are connected to the Internet and communicate back to their manufacturers or distributors. A prime example: printers and copiers that report supply consumption and problem-diagnostic data back to the manufacturer. This allows service calls and supply replenishment to arrive with minimal delays in production. A great benefit, to be sure.

The problem arises when large numbers of consumer devices start using this same capability, but without much in the way of careful design or attention to the possible security compromises. A buyer of a $1,500,000 production printer may safely assume that some attention has been given to this issue by the manufacturer. They also know that $1.5M worth of business gives them quite a bit of leverage to press the manufacturer to fix it if something is wrong. But a buyer of a $20 “smart” light bulb has neither of these safety factors. For $20, you get what you get.

As more low-cost consumer devices start turning up with internet capability, we start to see some very odd ideas expressed in this technology. Late in 2015, we learned about a vulnerability in Samsung refrigerators that exposed customers’ Gmail logins (including passwords) to cyber-criminals. Many people had questions about this. “How could this happen?” “Have they fixed the problem?” My question was, “WTF were REFRIGERATORS doing with Gmail logins?” This illustrates the first principle of IoT security:

  • 1st Principle of IoT security: Don’t give your devices information they don’t need. Think about the impact if information you give to something like a refrigerator were leaked to cyber-criminals. If a device works and does what you want even when you withhold some piece of information it asks for, drop the matter. Its feelings won’t be hurt; it has no feelings.

As I have said a number of times in this space, the essence of security is not absolute, but relative safety. Make trade-offs intelligently between risks and benefits.

When I get a new device, one of the first things I do is assess what I will gain by connecting it to my network and to the internet, vs. what might be at risk if the device’s security is not up to snuff. Most of the time, my conclusion is, “don’t connect it at all” or “connect it to the home network but keep it off the internet.” If your router has a parental-controls feature, where you can restrict your kid from getting online, you can also use that to restrict your fridge from getting online. Most devices’ main reason for being connected to the Internet is to feed data back to their manufacturers, data that can, at the most benign end of the spectrum, be used for marketing purposes. Consider that when assessing the risk side of this question.

  • 2nd Principle of IoT security: Don’t allow devices to connect directly to the Internet or the rest of your home network unless necessary.  Figure out what you’re really giving up if you don’t connect the device. And if the answer is, “not much”? Don’t plug in the wired connection, don’t give it the WiFi password, just say no.

Brian Krebs is an information security researcher (hacker!), with a blog that is very popular in our field. He does a lot of independent investigation of cyber-criminals, and as a result he often draws their ire. They have had heroin shipped to his door, and they have spoofed phone calls to police that resulted in a SWAT team being dispatched for a non-existent “hostage situation.”

Last fall, Krebs’ blog was hit by the largest denial-of-service attack that had ever been seen to that point: a botnet directed over 660 gigabits per second of bogus traffic at his server. For comparison, the fastest connection available from Time-Warner in Rochester is 50 megabits per second, so this attack was larger by a factor of 13,200. All of that traffic focused on a single website will disable the servers through sheer volume.

Upon investigation, the source of the traffic turned out to be infuriatingly simple. The attackers had just scoured the internet for connected IoT devices and checked whether they still used the manufacturer’s default username and password for remote access. They found millions that did, mostly CCTV cameras and cheap routers, and harnessed them to send Krebs a synchronized tidal wave of garbage network traffic. It’s tempting to say the devices were “hacked,” but they weren’t, really. Their owners had offered them to the public with the documented default logins, effectively free to use for all comers.

  • 3rd Principle of IoT security: Change the default username and password. If the install process forced users of all new devices to choose any non-default username and password, that alone might have been sufficient to stop the attack on Krebs.
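If you want a concrete picture of what “still using the default login” means, here is a minimal sketch for auditing a device on your own network. The address and the credential list are hypothetical, and many devices use web login forms or Telnet rather than HTTP Basic Auth (Mirai famously scanned Telnet), so treat this as an illustration of the idea, not a universal test:

```python
# A minimal sketch for auditing YOUR OWN device: does it still answer to a
# factory-default login over HTTP Basic Auth?  The address and credentials are
# hypothetical examples; check your device's manual for its actual defaults.
import requests

DEVICE_URL = "http://192.168.1.50/"   # hypothetical camera or router on my home network
DEFAULT_LOGINS = [("admin", "admin"), ("admin", "password"), ("root", "12345")]

def still_uses_default(url: str) -> bool:
    for user, pw in DEFAULT_LOGINS:
        try:
            resp = requests.get(url, auth=(user, pw), timeout=5)
        except requests.RequestException:
            return False   # not reachable over plain HTTP; can't test it this way
        if resp.status_code == 200:
            print(f"WARNING: default login {user}/{pw} still works -- change it!")
            return True
    return False

still_uses_default(DEVICE_URL)
```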

So to recap, our three principles of IoT security are:

  • Don’t give your devices information they don’t need.
  • Don’t allow devices to connect directly to the Internet or the rest of your home network unless necessary.
  • Change the default username and password.

Yes, there are problems in IoT security, and we’re going to need the manufacturers to address poor designs and worse implementations. But by applying these three principles, we can reduce the impact on our own lives, so that we still get some benefit from these modern things.

 

Hacker ≠ Criminal!

Whenever a news story breaks about information security (usually a radically bad FAILURE thereof), “security researchers” or “consultants” get trotted out by the media to give expert soundbites.  David Kennedy was a keynote speaker at the recently-concluded Rochester Security Summit, so he’ll do for my example:

TrustedSec’s David Kennedy on CNN from David Kennedy on Vimeo.

David is a security researcher – which means he’s a hacker.  No, I did not just accuse him of a crime.  He’s a wonderful guy and I would totally invite him to dinner.

The media have abused the term “hacker” for years now.  The original meaning of the word was simply, “One who is expert at programming and solving problems with a computer.”  That expertise, together with an insatiable curiosity driving one to exercise it, is what genuinely makes a hacker.

Cyber-criminals may or may not be hackers.  For example, if they wish to crack their way into some company in order to plunder its money or sensitive info, they might exercise their own high levels of technical skill.  But they might hire technical capability, and not exercise it themselves.  Or they might be what we call script-kiddies, people who find easy step-by-step recipes for creating digital mayhem, and use them to good effect against poorly secured targets.  They might not even be criminals: they might be state-sponsored, and thus their actions are legal.  At least under their nation’s laws.

But hacking is a set of problem-solving approaches, and a toolbox of techniques.  It’s a way to accomplish a goal, and the goal’s goodness or badness is not relevant.  Hacking is morally neutral.  If, and only if, the goal of the hacking is a crime, then a hacker also happens to be a criminal.

Security researchers (like David) are employed to find ways that our information systems can be exploited.  They might do malware reverse-engineering, vulnerability discovery and analysis, or refinement of social-engineering techniques.  Most of our companies don’t employ them: it’s too specialized.  Large providers and specialty firms (Verizon, FireEye) provide researcher talent, and we consume the output in the form of reports and alerts.

Independent researchers also work as consultants.  They may help companies figure out what happened after an attack, or they may routinely provide bug reports to manufacturers.  They may work on Red/Blue team exercises, where attacks are simulated and defenses are tested.  Without question, Security Researchers are hackers.  If they aren’t, they cannot function in that job.

He’s not a criminal, he’s just cold! 
Time to Go!

Where?  To the Rochester Security Summit of course! It kicks off tomorrow for two days of security geeking-out.  I am looking forward to it plenty.  My talk is on Friday at 2PM about full and responsible disclosure of bugs, bug bounties and so on.

This weekend I will make a post here, covering that topic.

Why I Block Ads. Everywhere.

Advertising supports a lot of the content you enjoy on the Internet.  The economics of it should be simple.  An advertiser pays a certain amount to get a commercial message in front of many readers or viewers.  Some percentage of those viewers make a purchase.  When enough revenue comes back to the advertiser, the ad is a good investment: returning more in margin to the business than it cost to produce and place.  In practice it’s a lot more complex than I state here, but the backbone of advertising remains just that simple.

This simple idea has recently started to create problems of the sort that show up in the Safer Computing inbox.  Advertisers realized that a digital advertising message can be a lot more than a picture with words or a short film to watch.  This means you can experience web pages with ads that are mini-games, ads that follow you around a page as you scroll, ads that follow you from page to page as you browse, and more.  

You may also be aware that ads make and store all sorts of inferences about you — inferences they gather from what goes on in your browser and on the rest of your computer.  These inferred personal profiles are scooped up by data brokers and packaged to be resold to other marketers.  That’s supposed to be done in enough volume to make each individual profile impossible to identify.  But recent research has shown that, with so many different data points being collected, working backward from a large “anonymized” data set to reliably identifying individuals is far easier than anyone suspected.  Yet, without enough different data points, the package is not attractive to marketers.  It will not find a buyer.

Another very disturbing trend in advertising is the enormous number of computer virus and Trojan infections that the ad networks now make possible.  Remember that the ads are more than just pictures or films; they have all kinds of sparkly interactive features.  They dance, they sing, they explore the bleeding edge of being so annoying that you want to throw the computer out the window and go for a walk instead.  And how do they accomplish these things?

Every one of those ads is a small program that you have half-consciously invited to run on your computer.  Your browser was instructed to bring these programs along with the content you wanted to see.  The intent of these programs appears to be delivery of a commercial message — but other functions are often hidden there.  Viruses delivered within web ads have infected hundreds of millions of computers around the world with everything from botnet spam clients to ransomware.  The websites that deliver these ads often don’t know what they are sending out; they simply allow ad networks to deliver whatever they like within broad guidelines and accept the payments for what is passed along.  The networks that aggregate and place these ads do not have the resources to check out all the ads they deliver, from what may be thousands of sources.  What’s worse, they don’t have the incentive.  With enough layers of middlemen, there’s nowhere for liability to land.

With all that to consider, I decided a while ago that I would block ads everywhere I could.  There are two counter-arguments to blocking ads that I did consider.  One is: how will I support the websites whose content I am enjoying?  Simple: I become a paid member or supporter of any site I read frequently enough.  Some sites I visit for the first time say they won’t serve me content unless I disable my ad-blocker.  Fair enough, I say, and click away to find a similar item elsewhere.

The other counter-argument is: how will I learn of cool new products or services I might want to try?  Since I was never one to find such things through ads, I consider this a small loss, if any.  The truth is, for anything larger than a tiny impulse buy, I check out new things at recommendation sites like Wirecutter, Sweet Home or Consumer Reports.  I prefer unbiased comparative reviews to advertising content when deciding what to purchase.

My current ad-blocker of choice is uBlock Origin by Raymond Hill.  It’s a very low-profile browser add-on for Firefox, Chrome or Opera. I say “current” because my choice has changed a few times recently.  Other ad-blocker providers have gradually been seduced by money and become ad networks in themselves, serving what they call “safe” or “white-listed” ads.  Their users have had varying levels of choice about this, from “a little” to “none.”  With uBlock Origin, so far so good.  If things change, I will add an updated recommendation in this space.

This article first appeared in The Empty Closet.

3-2-1 Backup

Backup is the most basic information security measure.  Whatever else happens, your worst-case, baseline fallback is: restore from a backup and get back to work.  So you always want to make sure your backups are rock-solid.  A rule of thumb for how to ensure that is easily remembered as 3-2-1.

3-2-1 backup means that you should:

  • Have 3 copies of your data (minimum)
  • Keep backups on at least 2 different media
  • Store at least 1 backup offsite

So that you can see this is not as hard or as involved as it might seem, I can give you an example from real life — from my own desk, my own PC.  I had been using CrashPlan Home for all backups here, but they just announced that the entire Home edition of the product is shutting down over the next year.  The deadline they have given me to migrate off is mid-January of 2018.

It’s true, I have two things that some home users do not: a second hard disk in my PC and a file server.  But the same effect can be had for anyone with, say, a large USB drive and a network disk like a Seagate Central.  The other thing I need, and that you’ll need, is a cloud storage service.

Backup #1: goes to my second hard disk.  There are many hazards backups protect against.  Probably the most commonly realized one is what we call PEBKAC.  That means, Problem Exists Between Keyboard And Chair.  In other words, this one is for when I am an idiot.  It will not protect me against hardware failure (unless that miraculously spares the one disk drive).  So, in that case, I move on to…

Backup #2: my file server.  This one will be OK even if my entire PC fails.  It’s also the one that I encrypt, because it’s also the source for a file-sync routine that goes to…

Backup #3: my cloud storage provider.  This is the one I will have to count on if the house burns down.  To do this, I chose a storage service that, like DropBox, does a continuous synchronization as its contents are updated.  Once primed, it will update every time the source backup updates.  I selected pCloud for this, because the yearly price for 2TB of storage was the most competitive, while still supporting the essential sync function.

Because I don’t trust the encryption at the file storage service alone, I am using backup software that provides local encryption.  For the software, I chose Duplicati.  It’s simple, it’s free (but make a donation, if you can!) and it’s open-source.  It also supports a vast array of cloud storage providers, so if I want to switch in the future, I will probably be covered.
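To show how simple the first leg of 3-2-1 can be, here is a minimal sketch of Backup #1: a dated copy of a working folder onto a second local disk.  The paths are hypothetical placeholders, and real backup software such as Duplicati adds the versioning, encryption and retention that a bare copy lacks:

```python
# A minimal sketch of Backup #1: mirror a working folder onto a second disk,
# one dated copy per run.  Paths are hypothetical placeholders.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Documents"                          # what I want to protect
DEST   = Path("D:/Backups") / f"documents-{date.today()}"   # folder on the second hard disk

shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)           # requires Python 3.8+
print(f"Copied {SOURCE} -> {DEST}")
```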

3-2-1: make sure you can get a working copy of your data if you need to.  Somewhere!

 

Cloudy With a Chance of Information Security

The Cloud! It sounds so… ethereal. We’re all going to have computers floating around in the air? What’s going on here, really? Today, let’s look at data storage “in the cloud” and how we can use it more safely.

A sticker on my laptop says, “There is no Cloud. It’s just someone else’s computer.” At its most basic, that’s what we mean when we talk about “the Cloud” for any computing or data storage need. We can host the website on a server we buy and maintain, or we can pay someone to host it on their server. We can store our photos and music on disks we buy, connected to computers we own, or we can pay someone to store them for us. When we pay for the service in money or personal info or both, then we’re users of “the Cloud.” If you keep music, video, pictures or documents in Google Drive, DropBox, SpiderOak, OneDrive or iCloud, you’re a cloud user. If you host a website on SquareSpace, Weebly, GoDaddy or any similar services, you’re also a cloud user.

Of course, the fact that it’s someone else’s computer means that we don’t have as much control as we might over how the data we store there gets handled. This is where the security considerations require more thought. Every cloud service will tell you how secure they are. Every one will tell you about their use of encryption. Encryption matters, a lot. But what matters more is a careful consideration of the “What-Ifs”. It’s what we security guys call “threat modeling.” You have to imagine the ways in which your information could get compromised, and see if the security measures in place actually protect against the threats you care about. So when DropBox tells me that they have strong encryption, I have to think: what is encrypted, and how are the keys handled? When I poke a little further, I learn that they encrypt the data I send there “in transit” and “at rest.”

“In transit” means, when I send the data from my computer to Dropbox’s, it travels over an encrypted connection. That’s good. But my “what-ifs” didn’t seriously include, “What if someone eavesdrops on my network connection while I upload the file?” What I did wonder was, “What if someone hacks access to Dropbox’s data center and can go wandering around on their servers, looking at stuff?” The fact that my data arrived there safely last week doesn’t help me now, does it? So now I consider the fact that they also do “at rest” encryption. That means the data is encrypted while stored on their disks waiting to be retrieved. OK, that’s pretty good. But then one more thing bugs me: DropBox controls the keys needed to open those encrypted files and retrieve them in their original state. If those files are my tax returns, or sexy shots of my lover, I certainly don’t want anyone with access to the keys to be able to look at that! Yet, in this hacker-in-the-DropBox-servers scenario, that is exactly what becomes possible, because the same baddies who can get to my at-rest data can also probably get to those keys.

When I decided to use DropBox (or any of the similar services), I considered these kinds of things. The compromise I made when I decided to go ahead and use their service was accepting that the data I stored there would indeed be vulnerable to this kind of threat. I also knew I had two ways to mitigate the risk, and I use a combination of both. The first and most important is that I am simply cautious about what I put in there. I put things there that I want to share, that I want available from my mobile devices, and that I wouldn’t mind terribly if they were disclosed. No tax returns, and no cheesecake shots of my sweetie. Yes to pictures of my cats, social media memes or raw materials for blog posts.

The other mitigation is what I apply to the few things that do need protection but also need to be more widely available: I add my own encryption. If you think of encryption as a secure box to which you hold the key, then you’ll see why this helps. I encrypt my secret data — I put it in a box and lock it. Then I send it to DropBox. DropBox gets a file from me, encrypts it with their key, and stores it. Now, it’s a box within a box. If someone hacks DropBox’s data center, they can open the box locked with DropBox’s key only. When they get to what’s inside, it’s still locked with my key. And I never send that to DropBox, so my secrets are safe.

Encryption is a lock. Who holds the key, that’s what really matters.

The easiest way to add your own encryption to a file or several is to use one of the widely available utilities that create “Zip” or similar archives out of files or batches of them. All of these, in their latest versions, have the option to encrypt the resulting archive with a very strong and reliable system called AES – Advanced Encryption Standard. Just make sure you create a good strong password or phrase (as I wrote about here). And record that passphrase anywhere but in the cloud service where you store the resulting archive.
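If you’d rather script the box-within-a-box step than use a zip utility, here is a minimal sketch using Python’s third-party cryptography package (its Fernet recipe is AES-based).  The file names are hypothetical; the only rule that matters is that the key never goes to the same cloud service as the encrypted file:

```python
# A minimal sketch: encrypt a file locally before it ever leaves your computer.
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # keep this OUTSIDE the cloud service
with open("my-backup.key", "wb") as f:
    f.write(key)

box = Fernet(key)
with open("tax-return.pdf", "rb") as f:      # hypothetical secret file
    ciphertext = box.encrypt(f.read())

with open("tax-return.pdf.enc", "wb") as f:  # this is the only copy that goes to DropBox
    f.write(ciphertext)
```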

 

Your Passwords Suck

So do mine.  What can you say?  Maybe I should write that, p@5SW0rdz?  It doesn’t matter.  We all use passwords.  It’s the simplest and most popular method systems and sites have to authenticate us.  But let’s face it, passwords suck.  There are lots of problems with how we use passwords, and my aim today is to help sort some of those out.

The main thing you need to know about passwords is that they are typically not used well enough to secure much of anything, because humans have certain mental patterns that are difficult to break out of.  One is that we will tend to choose, as “secret words” that we know we need to remember, things that have a particular meaning to us.  A child’s name, a wedding anniversary, a favorite sports team.  The advantage is that these things don’t change, so we can reliably remember them.  But that is also a huge disadvantage, especially since we make it easy for anyone to learn these things about us, via social media.

A common strategy used to attack password security is brute force: just guess all the possible passwords until you get a match.  Once an attacker knows your kids’ names, your milestone dates, your favorite teams or bands, the range of things they have to guess just got a lot smaller, so getting that match just got a lot easier.  Almost as easy for an attacker, is when your passwords are not based on your life, but still are real words.  Now we have a refinement to brute-force guessing: the “dictionary attack”.  This can reduce finding a password using modern computing equipment to only seconds, instead of hours or days.  And it’s usable even if you take your favorite fruit, say, “pineapple”, and cleverly change it to “p1N3Appl3”.  Dictionary-attack software takes all those transformations into account, and it’s only slowed down by a few heartbeats.
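To see why that clever substitution buys you almost nothing, here is a minimal sketch of how a dictionary attack expands one word into its common “leet” variants.  The substitution table is a small, illustrative subset of what real cracking tools use:

```python
# A minimal sketch of dictionary-attack variant generation.  Real tools use far
# larger substitution tables and word lists; this subset makes the point.
from itertools import product

SUBS = {"a": "a@4A", "e": "e3E", "i": "i1!I", "o": "o0O", "l": "l1L", "s": "s5$S"}

def variants(word: str):
    pools = [SUBS.get(ch, ch + ch.upper()) for ch in word.lower()]
    for combo in product(*pools):
        yield "".join(combo)

guesses = list(variants("pineapple"))
print(len(guesses))               # a few thousand variants of a single word
print("p1N3Appl3" in guesses)     # True -- it's on the list in well under a second
```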


There’s another habit we have as humans that makes life easier for criminals: we reuse passwords.  Having more occasions to type in a given password makes it likelier we’ll remember it, doesn’t it?  Well, all this means to a criminal is that once they figure it out for one site, they have it for everywhere we go.  Now, even as hard as it is to remember a single good password, here’s that mean old Safer Computing blogger telling you to make up a new and different one for every site.  This is ridiculous!  You can’t do this!  Heck, Safer Computing can’t!  Nobody can….

Nor should they.  No, the human brain is not up to making or remembering good passwords.  Because p@5SW0rd is a pretty lousy one, and so is p1N3Appl3.  A good password is actually something like Kg52k$hm^YG@yuR%WD.  But I don’t want to type that, and I don’t want to have to remember it.  Lucky for me, I don’t have to.  There are a number of good password managers out there, which are systems that create, set and use good complex passwords for you, without giving you the headache of dealing with strings like Kg52k$hm^YG@yuR%WD.  The one I would recommend from my current tool kit is LastPass (https://www.lastpass.com/).  It integrates into your browser so you can let it automatically log into sites for you.  When you’re signing up for some new service, it (usually) detects that and offers to generate a gnarly unguessable password for you.  And if you load your current set of passwords into its database, it will offer to fix problems like weaker passwords and duplication.  All in all I have been happy enough with it to upgrade to the paid version for several years now.  But start with the free version, it’s got more than enough power for most folks.  If you want to try something else, try taking a look at 1Password (https://1password.com/) or for a stand-alone program instead of a web-based database, try KeePass2 (http://keepass.info/).  I have no affiliation to any of these products.  
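If you’re wondering what a password manager is doing when it offers you one of those gnarly unguessable strings, it’s essentially this.  A minimal sketch using Python’s standard-library secrets module; the length and character set here are my own choices, not any particular product’s:

```python
# A minimal sketch of random password generation with a cryptographically
# secure source of randomness (the secrets module).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())   # something in the spirit of Kg52k$hm^YG@yuR%WD
```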

Finally, let’s talk about ways to make your passwords less important (they suck, remember?).  The best way to do this is to add a second factor to your authentication on anything important.  If the password is the only thing you need to get into a service, then having that password compromised is a disaster.  But if getting into, say, your Gmail requires both a password and the code for Gmail in the Authenticator app on your phone, then losing only one of those is merely annoying rather than disastrous.  If an important website (email, social media, banking, stock trading, etc.) offers two-factor authentication, you should absolutely accept that offer and set it up.  The second factor will often be tied to your phone, but that’s actually just about ideal.  You already have it, and it’s something you have that a crook who just guessed a password does not.  This makes everyone safer (crooks excepted).
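For the curious, the six-digit code in the Authenticator app is a time-based one-time password (TOTP).  Here is a minimal sketch using the third-party pyotp package: the site and your phone share one secret at setup time, and from then on each derives the same short-lived code from that secret plus the current clock:

```python
# A minimal sketch of TOTP, the scheme behind most authenticator-app codes.
# pip install pyotp
import pyotp

secret = pyotp.random_base32()   # shared once at setup, usually via a QR code
totp = pyotp.TOTP(secret)

code = totp.now()                # the six digits your phone would display right now
print(code, totp.verify(code))   # the website performs this check: prints True
```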

If an important site you use does not offer two-factor authentication, ask them some questions: Why not?  When WILL they offer it?  and of course, How do I transfer my account to a competitor who DOES offer it?