I was recently debugging an issue on an app. I had my test set up and
properly failing. I was debugging and had narrowed it down to the correct
file. I wasn't getting an error or exception, so I didn't know the specific
line number; I just knew roughly where the error was.
I scanned the file, which was only about 15 lines. I was looping through some
form fields and needed a counter, like below:
- if job_skill.id.blank?
  - key = "key#{id}"
  - id =+ 1
If you paid attention to the title or have a sharp eye, you might have
noticed the slight difference:
=+ != +=
I would have thought that =+ would cause some sort of exception, but
it doesn't:
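Here's a quick sketch in plain Ruby (nothing app-specific) of why no exception shows up: with a space after the equals sign, =+ is read as an assignment of a unary plus, so id =+ 1 is really id = (+1), and the counter silently gets reset to 1 instead of incremented.
id = 5
id += 1    # increment: id is now 6
id =+ 1    # parsed as id = (+1), so id is quietly set back to 1
puts id    # => 1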
I got a call from a client who reported a strange notification from
Google Chrome:
It's a feature of Google Chrome that tries to guess what language a page is in and match it against the user's default language. So, for example, if an English speaker went to an obviously French site, the same bar would pop down, say the page is in French, and ask if you'd like it translated.
What's surprising is that apparently there's a language called Tagalog (did you know??), but what's confusing is why Chrome thinks this page is written in Tagalog. I hid most of the page, but it's just a few simple sentences and some table data.
This is really an issue with Chrome, but after doing a little research, I found I can add some meta tags to "hint" to Chrome that the site is in English.
I found this page on SO that discusses the same problem. I'm going to try adding these meta tags in:
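The general idea is to declare the language on the html element and in a content-language meta tag (and, if you want, a meta tag that tells Chrome not to offer translation at all). Something along these lines:
<html lang="en">
<meta http-equiv="content-language" content="en">
<meta name="google" content="notranslate">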
A few nights ago, I had a rails app running really slow. Users were
complaining and sending support tickets to my client. Not my favorite
message to get at 11pm!
For reference, it's a Rails app running on Unicorn and Nginx, all on
Ubuntu 10.1.
One of the first things I did was ssh into the server and run $ top,
which gives you a brief overview of the system vitals like memory usage,
server load, and top processes. The server load is usually the most useful
to me. I'm not sure exactly how it's calculated, but roughly it's a count of
processes waiting to run, so it's a scale starting at 0. At 0-1 your server is
basically idle, at 1-3 it's humming along fine, and when you start getting
over 5, it's probably running noticeably slow. Well, I've seen this number
creep up to 15-20 on other servers in the past, but this night it was only
in the 4-5 range.
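(Worth noting: the load number only really makes sense relative to how many CPU cores the machine has; a load of 4 on a 4-core box means it's roughly fully busy. If you're not sure how many cores a server has, either of these will tell you:)
$ nproc
$ grep -c processor /proc/cpuinfo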
The next thing I did was to take a look at the logs. I did an $ ll -h
to see what was going on in the directory and got this:
total 1.1G
drwxrwxr-x 2 user group 4.0K 2014-01-23 06:25 ./
drwxrwxr-x 12 user group 4.0K 2012-08-20 08:07 ../
-rw-r--r-- 1 user group 1.1G 2014-01-22 06:25 production.log
-rw-r--r-- 1 user group 17M 2014-01-22 12:22 unicorn.log
(*I replaced the real user/group names with generics to protect my
client's and server's identity)
Whoops, I don't think that log should be quite that large. Each time
Rails has to write to the log, it's dealing with a 1.1G file. I'd
actually been manually rotating this log, but it was now time to find
something more automated.
After checking around a bit, I found this article on Stack Overflow
talking about a utility that I assume comes with most Ubuntu
installations called logrotate.
It's actually easy to set up. I just created a file in the
/etc/logrotate.d/ directory. This is a config file for logrotate.
Mine looked like this:
# /etc/logrotate.d/app_name
# Rotate Rails application logs based on file size
# Rotate log if file greater than 20 MB
/home/user/apps/app_name/current/log/*.log {
  size=20M
  missingok
  rotate 52
  compress
  delaycompress
  notifempty
  # copytruncate copies the log and then truncates the original in place,
  # so the running app keeps writing to the same file handle (no restart needed)
  copytruncate
}
So it just basically rotates the log when it gets to 20M.
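If you don't want to wait around to find out whether the config is even valid, logrotate has a debug flag that does a dry run against a given config file and prints what it would do without touching anything:
$ sudo logrotate -d /etc/logrotate.d/app_name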
One thing I was unsure of was how the new config would take effect. Did
I need to restart a logrotate service or reload the config? It turns out
logrotate is a cron job run by the system, so you don't need to do
anything. It'll just start working.
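(On Ubuntu, that cron job lives at /etc/cron.daily/logrotate. And if you'd rather see a rotation happen right away instead of waiting for the next daily run, you can force one for just your config:)
$ sudo logrotate -f /etc/logrotate.d/app_name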
The other question I had was: where do the rotated log files end up? In
my case, they were placed in the same directory as the main log file,
which was fine because it was symlinked to a 'shared' directory
within my Rails app setup.
Now my log directory looks like this after a few days:
total 61.5M
drwxrwxr-x 2 user group 4.0K 2014-01-23 06:25 ./
drwxrwxr-x 12 user group 4.0K 2012-08-20 08:07 ../
-rw-r--r-- 1 user group 135K 2014-01-23 09:28 production.log
-rw-r--r-- 1 user group 43M 2014-01-23 06:25 production.log.1
-rw-r--r-- 1 user group 1.5M 2014-01-22 06:25 production.log.2.gz
-rw-r--r-- 1 user group 17M 2014-01-22 12:22 unicorn.log
When I ran the migration (rake db:migrate), I got the following error:
-- add_index(:private_label_plan_assignments, :private_label_account_id)
rake aborted!
An error has occurred, this and all later migrations canceled:
Index name
'index_private_label_plan_assignments_on_private_label_account_id' on
table 'private_label_plan_assignments' is too long; the limit is 63
characters
/Users/jess/Dropbox/websites/gypsi/gypsi-web/.bundle/gems/ruby/1.9.1/gems/activerecord-3.2.13/lib/active_record/connection_adapters/abstract/schema_statements.rb:573:in `add_index_options'
I remembered vaguely running into this issue before, but couldn't
remember what the problem was, so I had to do the research all over
again. Luckily, I found it rather quickly, but I wanted to write it
down somewhere so I could either remember it next time or better
reference it. Maybe you'll find it helpful too.
The issue is that the add_index method in Rails automatically generates an
index name in the database. However, the database has its own limits,
and in this case 63 characters is the limit for an index name.
The simple solution is to name the index manually, and luckily Rails
provides that option in the add_index method call. So, just change
your migration to pass an explicit name, like so:
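Something along these lines; the shortened index name below is just one I made up, and anything unique under 63 characters will do:
add_index :private_label_plan_assignments, :private_label_account_id,
  :name => 'index_plan_assignments_on_account_id'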
When I first started my company in 2008, my backup plan was simple: I
had an external hard drive and I just backed up to it once a week.
After a while that started to become too much of a burden, and I had
also set up an Ubuntu dev server that I wanted to try backing client work
up to. So I followed this popular article about setting up TimeMachine
on Ubuntu,
and it worked great. Then Rackspace bought a little company called
Jungle Disk, which had an option for Ubuntu to back up files to S3 or
Rackspace's Cloud Files for an offsite solution.
This worked really well because I had everything in one place and lots
of redundancy. I had a 2nd hard drive in the dev server that I rsynced
to. I also had another computer in the office that I rsynced to, and I
had the offsite backup in case of fire, theft, flood, etc.
What changed?
In 2009, I really started to get into Rails development, which works
best when you have your files locally: you run a local web server and a
local database server, Git is slow over a network for large
projects, and things just work better in my local env. Also, I started
using Dropbox a lot and enjoyed how easy it was to share files/folders,
plus it came with the convenience of backing up your files
automatically in the background. Around that time I also ordered a new
dev server with a RAID setup to mirror the data.
This is when things started to get a little off. After setting up my
new server, I never set JungleDisk back up. We started putting all
new client directories in Dropbox. We ended up with a lot of old
client data on the server, some internal applications (i.e. billing/time
tracking), and lots of old resources (images, videos, etc).
I've had it on my todo list for some time now to find an offsite solution
for my dev server, and I've looked a few times, but just haven't found
anything that seemed to fit what I wanted.
No OS Found
My wife and I take turns exercising each day. One day she does
CrossFit and the next I ride my bike. On Fridays (her day) the
CrossFit class is later in the morning, and I usually take some time to
hang out with my 3-year-old, Nate. We were about to go see my 93-year-old
grandfather, and I was trying to get a deposit ready in the
office, when Nate came in and simply pushed the power button on my
battery backup for the dev server.
I got a little flustered, ushered him out, and didn't think much of it
(figuring it would reboot itself as it had before after long power
outages). Later that afternoon, I went to enter some time in my time
tracking app and it wouldn't come up. That's when I went to the server,
turned the monitor on, and saw No OS Found.
I instantly started to get a little nervous. I rebooted several
times and tried going into the BIOS setup and the RAID setup, with no
luck.
This is when the regret started pouring in and I started kicking myself
in the rear. How difficult would it have been to just keep a simple
local copy on one of the other 5-6 machines I have around the office?
It wouldn't have taken any more time than writing this article.
The Outcome
I was fortunate that I hadn't lost any client work. I did have a few
old client directories that had some graphic source files, but the
biggest thing was a month's worth of billing / time tracking. How could
I ever recreate that?
This happened on a Friday night, so I couldn't take it anywhere
immediately. I was going to wait until Monday, but my anxiety got the
best of me and I took it to a place that had done good work for me
before. After taking a quick look, they said it'd be Monday before
they had any answers.
Well, this made for a really crummy weekend.
Finally on Monday (afternoon) I got the answer I'd been looking for: my
data was safe. There was actually nothing wrong with the hard drives.
If you turned off the RAID controller in the BIOS, the drives would boot
fine (so they were perfectly mirrored), but something had gotten messed
up with the RAID configuration and it was no longer recognized. You
have to erase the drives to redo that configuration, so I immediately picked
up the computer to make backups!
The Takeaway
Don't get lazy and slack about your backups... it's easy to do over time,
especially if you haven't had a scare in a while. Also, with all the
focus on the cloud (Dropbox, Google Apps, S3, etc.), it's easy to
distance yourself and be naive about how covered you really are. I
felt way too comfortable about the RAID setup, figured I had good
coverage, and was very close to being WAY wrong.
Take backing up seriously, and don't rely on just one source and/or
service.