Download my QSL card template, replace my info with yours, and
send to printers. You're QSL-card-good-to-go.
I recently went through the process of creating my own QSL card backs
using Adobe's InDesign, and printing "small" quantities using the 4x6
postcard option at Overnightprints.com. I have no affiliation with
Overnight Prints; they just had an easy-to-use web interface and clear
measurements for the postcard bleed / safe zones.
For some reason, I was unable to find any pre-existing high-resolution
QSL card back templates that were downloadable and customizable.
Problems are always solved if you do it yourself!
Most QSL card printers seem to have minimum runs of around 500 cards,
which is way too many for me, at least given my current station and
the likelihood that I'll stay in the same location. I also wanted a
little variety, and was able to send off 4 different designs at 25
cards each for around $6.70 per 25.
One important thing to note - these cards were designed to be mailed
in envelopes. There are specific USPS guidelines for mailing postcards
without envelopes, and I may update my template design at a future
point to meet them. The envelope allows more artwork to come through
undamaged, however, and I prefer this.
Overnightprints.com is expecting a 6.25"x4.25" set of images (front
and back) at 300dpi. The "safe zone", the zone where your text is
guaranteed to not be trimmed, measures 5.875"x3.875".
If you are setting this up in a graphics program, set the bleed
zone to +0.125" and the safe zone to -0.125". This is how my InDesign
template is set up. The final graphics size will be 6"x4", and you
can see the printed and delivered results here:
Here is the back side of the two card designs:
If you want to use a photograph or other artwork for the front of
your card, you'll want it to be at minimum 1875x1275 pixels. If it's
larger, that's okay; you can scale down, or just move the image around
and crop out the bit that you want. You can see in the screenshot
below that the NYAN cat image is larger than the bleed zones. When I
export the 2-page PDF, it trims this excess off and only the
6.25"x4.25" image remains.
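That pixel minimum is just the full-bleed size multiplied by the print resolution; here's a quick sanity check from the command line (awk used purely as a calculator):

```shell
# minimum pixels = full-bleed inches (6.25 x 4.25) * 300dpi
awk 'BEGIN { printf "%dx%d\n", 6.25 * 300, 4.25 * 300 }'
# -> 1875x1275
```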
At least in InDesign, when you export, make sure you use the PDF
Print Quality setting along with the Document Bleed and Slug
settings. This just includes the extra 0.125" margins around the core
6"x4" image.
The output looks like this:
Once you have this PDF, you can upload it to Overnightprints. It will
give you a preview of the front and back and show you the bleed and
safe zone markers. If any of your text is not contained within the
safe zone, it's likely to be chopped. If you kept it inside the inner
box of the template I've provided, this should not happen.
I realize that InDesign isn't a free program. I'll post an update soon
with an Inkscape compatible version. I just had a copy of InDesign due
to generous university license arrangements, and it was super easy to
use and set up.
BTW - I didn't create NYAN cat. I'm sure you know this, but I'm
explicitly stating it just in case. (NYAN cat is the pop-tart cat with
the rainbow...) The Reddit "crest" was originally created by /u/licenseplate/.
I'm willing to help anyone with their QSL card layout. Email me. You should
be able to figure out my email address if you've found this blog post!
If you're a redditor, and have all the text, graphics, etc., but no
access to InDesign, I'll help you with the whole layout process and
export you a PDF. I'm not a graphic designer, and am not generally
offering my services. Trust me, you'd not want to utilize me in the
graphic design domain! :)
For the past four months, I've been studying to obtain amateur
radio operator licenses at varying class levels. This past Saturday,
1 June 2013, I obtained the highest level license, the Amateur
Extra. I can now legally operate on all frequencies allocated to the Amateur Service.
On 10 March 2013 I passed the Technician level test. A few weeks
prior, KB9JHU led a "ham-cram", an
all-day, learn-all-the-material-in-one-sitting event. I learned quite a
bit, and followed up this instruction with practice test drilling at
HamStudy.org. I've had good friends involved
in ham radio going all the way back to the late 90s (my teenage
years), but I was so entranced with the BBS and computing scene that
ham radio never quite captivated my interest. My neighbor at the time,
WA9ALY (SK), spent quite a bit of time evangelizing ham radio while we
conquered various computing hardware and software issues. I really
wish I'd known more about HF and packet at the time, as it would have
been quite impressive to blast data thousands of miles without any
infrastructure. Despite my lack of uptake at the time, the seed was
planted and my fate was sealed. Thanks to my "XYL", I entered into an
amateur radio family.
AB9D has been urging me for the past 5 years to
get my license, and with this post I wanted to share my experiences
and methods of traversing the Technician, General, and Extra class
exams. 6 April 2013 marked the passing of the General exam; the Extra
followed 2 months later, on 1 June.
Obsessive drilling with hamstudy.org is the
primary method I used to progress through the licensing levels. If
you're currently sitting at either the Technician or General
level, becoming a /AG or /AE is only 2 weeks away if you're
steadfast and dedicated.
In my estimation, 1 hour per day for 2 weeks is all it takes to move
to the next level if you're focused for that 1 hour. I probably spent
a bit more time, due to my slightly less-than-disciplined study
attitude, but if you are ready, you'll pass without issues.
The key, in my opinion, is setting a deadline. Let others know you're
going to become a "General" or "Extra" on XYZ date. The social
pressure of being "a failure" may be of assistance depending on your
personality type. Personally, I like deadlines, be they fake or real.
In all honesty, there is really no failure. Take the tests 500 times
if that's what it takes.
If you're consistently passing the practice tests, and you've taken at
least 5 of them, then you're almost certainly going to pass the test
when you take it during the next testing session.
The great thing about Hamstudy.org is that
the guys there are dedicated and constantly improving the site. Email
them if you don't like something or think something should be
different. Chances are you'll get a response, and your request will be
scheduled for implementation.
This is just a quick note about a possible JBoss / HornetQ issue that
may crop up if you're building AMIs from a known working setup, and
plan to use that AMI to fire up multiple worker nodes in a cluster.
I've noticed this forever-repeating message in my logs on the n>1
nodes in my test clusters:
19:29:52,412 WARN [org.hornetq.core.client] (hornetq-discovery-group-thread-dg-group2) HQ212050: There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=8579c60f-cc7b-11e2-8e0f-cd3567e135ae
19:29:52,413 WARN [org.hornetq.core.client] (hornetq-discovery-group-thread-f193984e-cc83-11e2-b66d-cf94e9e2fc58) HQ212050: There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=8579c60f-cc7b-11e2-8e0f-cd3567e135ae
I believe the problem is that I had just cloned a working/set-up
machine, which left behind some id files that are not being updated
when the new AMIs come online.
My solution to the problem was to remove all the files from the JBoss
AS7 data directory:
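As a sketch, the cleanup looks like this (the JBOSS_HOME default is an assumption based on a lein-installed Immutant; point it at your actual install, and stop the server before running it):

```shell
# Wipe the AS7 data directory so HornetQ regenerates its node id
# on next boot. JBOSS_HOME below is an assumed default; adjust it.
JBOSS_HOME="${JBOSS_HOME:-$HOME/.lein/immutant/current/jboss}"
rm -rf "$JBOSS_HOME/standalone/data/"*
```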
I'm sure there is a smaller subset of files you can remove - probably
just the HornetQ files (the messagingbindings and messagingjournal
directories). The directory listing should look something
like this (before you remove them of course!):
note: these writeups assume you've got a working Clojure/JVM
environment, know how to use the basics of AWS (EC2, AMI, S3), and
have played with single-node Immutant.
Amazon Web Services + Immutant and Torquebox
Immutant and TorqueBox are completely awesome,
as is the community
support and responsiveness on the Freenode IRC channel #immutant.
After lots of feedback, suggestions, and help, it seemed like it would be
worthwhile to document the setup procedure for this stack running
clustered on AWS.
Out of the box, Immutant is configured to use multicast for node
discovery, but AWS does not support multicast. I wanted a setup that
would allow me to
dynamically fire up arbitrary worker nodes (immutant/torquebox) that
would participate in the cluster and register with a fronting load
balancing Apache / mod_cluster instance.
Elastic IPs are limited on AWS, so I wanted to use as few as possible.
In my setup, load balancers get an elastic IP, as do the database master
nodes. Immutant/TorqueBox nodes are created from AMIs I've built in
advance, and use whatever address AWS assigns. These AMIs dynamically
pull configuration from my git repositories and set themselves up
during boot time. I should probably look at
Pallet, but I've just not had enough time. I
ended up cobbling together shell scripts that create and destroy nodes
based on the AMI id.
My Needs/Setup (Overview)
AWS instance running Apache/mod_cluster, with an AWS Elastic IP
An arbitrary number of Immutant/TorqueBox AWS instances participating
in a cluster
A MongoDB instance with an Elastic IP.
Lots of configuration checked into DVCS (git)
Immutant and AWS
I'll start with the most fun and important part first. I'll give my
configurations for Apache/mod_cluster in a follow up post. Most people
will probably be interested in the Immutant on AWS cluster configuration
above all else.
Immutant has excellent tutorials and instructions, so if you're
completely new to Immutant, you'll want to check those out first.
To get this working, you'll be editing some XML. Sounds like fun
right? It's not really that bad, but it's XML nonetheless.
standalone-ha.xml is where most of the changes to the stock
configuration will be made. First we'll install Immutant from the
command line, if you've not done so already:
# assumes leiningen, and the lein-immutant plugin
lein immutant install
From here we can take a look at the standalone-ha.xml file located
in the ~/.lein/immutant/current/jboss/standalone/configuration/
directory.
As part of my deployment process I have this file copied to the appropriate
directory when my AMI comes up. I keep a copy of standalone-ha.xml
in a git repo alongside other configurations and dev-ops type scripts.
You'll have to do something similar if you want to have the ability to
arbitrarily bring up and shut down members of your cluster.
Based on the suggestion
of not binding TorqueBox's public interface to 0.0.0.0, I altered the <interfaces> tag
so that the public interface sub-element binds to eth0 instead:
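As a sketch, the altered section can use a <nic> criterion (the AS7 interface schema supports <nic>; eth0 matches my AMIs, but adjust for yours):

```xml
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <!-- bind to eth0's address rather than 0.0.0.0 -->
        <nic name="eth0"/>
    </interface>
</interfaces>
```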
On my AMIs, eth0 is the internal AWS IP address. Your instances
should have security group settings that allow UDP and TCP
communications as well, but I'll get to that in a follow up post.
Next, we need to configure JGroups to use some method of TCP
communications for broadcast and discovery. The default should look
something like this:
We need to change the default-stack to tcp, and then modify the
TCP stack. I removed the UDP configuration completely, but you can
leave it alone if you want.
MPING will not work on AWS, but thankfully there are TCPPING and S3_PING.
S3_PING is ultimately what you'll want to set up if you
want to be able to add and remove nodes from your cluster without
touching the configuration, but TCPPING is easier to set up and verify, so I'll
cover that first. For more JGroups info, check the JGroups
documentation.
You'll want to replace the above JGroups configuration with
the configuration below:
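As a sketch, a tcp default-stack with TCPPING for discovery looks like the following. The protocol list is modeled on the stock AS7 tcp stack, and the ports and property values are illustrative, so verify them against your JGroups version:

```xml
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">
    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <!-- static discovery: list every cluster node explicitly -->
        <protocol type="TCPPING">
            <property name="initial_hosts">ip.address.node.1[7600],ip.address.node.2[7600]</property>
            <property name="port_range">0</property>
            <property name="timeout">3000</property>
            <property name="num_initial_members">2</property>
        </protocol>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK"/>
        <protocol type="UNICAST2"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="UFC"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
    </stack>
</subsystem>
```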
Of course, change ip.address.node.1 to the address bound to eth0
on your first cluster node, and ip.address.node.2 to the address
bound to eth0 on the second.
Finally, we need to tell HornetQ
to use our JGroups TCP configuration instead of the UDP (which is the default).
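A hedged sketch of that change in the messaging subsystem: point the broadcast and discovery groups at the JGroups tcp stack via jgroups-stack / jgroups-channel. These elements require a messaging schema version that supports JGroups discovery, and the group, channel, and connector names are illustrative, so verify against your standalone-ha.xml:

```xml
<broadcast-groups>
    <broadcast-group name="bg-group1">
        <jgroups-stack>tcp</jgroups-stack>
        <jgroups-channel>hq-cluster</jgroups-channel>
        <broadcast-period>5000</broadcast-period>
        <connector-ref>netty</connector-ref>
    </broadcast-group>
</broadcast-groups>
<discovery-groups>
    <discovery-group name="dg-group1">
        <jgroups-stack>tcp</jgroups-stack>
        <jgroups-channel>hq-cluster</jgroups-channel>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>
```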
At this point if you were to fire up Immutant on each of the nodes
that you configured in the initial_hosts setting of the TCPPING
JGroups configuration, you should see a message in the log file
indicating that one node became the Master, and the other not. You'll
also see a cluster count message:
18:42:57,478 INFO [org.jboss.as.clustering] (MSC service thread 1-1) JBAS010238: Number of cluster members: 2
18:42:57,479 INFO [org.projectodd.polyglot.hasingleton] (MSC service thread 1-1) inquire if we should be master (testapp.clj-hasingleton-global)
18:42:57,480 INFO [org.projectodd.polyglot.hasingleton] (MSC service thread 1-1) Ensuring NOT HASingleton master (testapp.clj-hasingleton-global)
18:42:57,480 INFO [org.projectodd.polyglot.hasingleton] (MSC service thread 1-1) Started HASingletonCoordinator
Above we see the log output of our non-master node. You can find the
full logs in the ~/.lein/immutant/current/jboss/standalone/log directory.
TCPPING is great to at least verify your AWS settings are correct and
that JGroups is working properly. I spent a lot of time with a
non-working S3_PING configuration that really didn't report any
errors, and was not seeing any cluster communications. After many
helpful suggestions from #immutant, I cranked up logging levels, and
eventually started iterating through possible problems. Eventually I
got it working.
S3_PING is great if you want a dynamic AWS environment. No hard-coding
IP addresses at all. You just configure an S3 bucket, and get the AWS
keys for the user who has read/write/list privilege to that bucket.
Since IP addresses can change on AWS, you're really just asking for
trouble if you rely on them. You could of course use Elastic IPs, but
you do not have an unlimited number.
The change is quite simple. In the JGroups subsystem, replace the
TCPPING protocol element with an S3_PING one:
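A sketch of the S3_PING protocol element (the property names come from the JGroups S3_PING protocol; the bucket name and credentials are placeholders):

```xml
<protocol type="S3_PING">
    <property name="location">your-s3-bucket-name</property>
    <property name="access_key">YOUR_AWS_ACCESS_KEY</property>
    <property name="secret_access_key">YOUR_AWS_SECRET_KEY</property>
</protocol>
```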
I'm pretty sure you have to create the S3 bucket before using the
configuration, so if you see any strange stuff in the logs, double
check your permissions.
Everything should work as before, when you had JGroups set to use
TCPPING. To test, you could create jobs scheduled to run on only one
node of the cluster, send messages to queues and topics, and check
the contents of your distributed caches on the nodes using nrepl.
Jim Crossley put up some really great examples
a couple of months ago that demonstrate Ruby and Clojure apps
interacting. If you've not played with message queues or polyglot
systems, this is a good place to get started.
In the next AWS / Immutant post, I'll provide details on my mod_cluster /
Apache configuration for load balancing. I'll also talk a bit about my
Rails / Clojure interaction, and how my AMIs are configured to pull
configuration from GitHub.