QSL Cards de NG9i

TL;DR

Download my QSL card template, replace my info with yours, and send to printers. You're QSL-card-good-to-go.

Problem

I recently went through the process of creating my own QSL card backs in Adobe InDesign, and printing "small" quantities using the 4x6 postcards at OvernightPrints.com. I have no affiliation with OvernightPrints; they just had an easy-to-use web interface and clear measurements for the postcard bleed / safe requirements.

For some reason, I was unable to find any pre-existing high resolution QSL card back templates that were downloadable and customizable. Problems are always solved if you do it yourself!

Printers

Most QSL card printers seem to have minimum runs of around 500 cards, which is way too many for me, at least given my current station and the likelihood of moving. I also wanted a little variety, and was able to send off 4 different designs at 25 cards each for around $6.70 per 25.

One important thing to note - these cards were designed to be mailed in envelopes. There are specific USPS guidelines for mailing postcards without envelopes, and I may update my template design at a future point to follow those suggestions. However, the envelope allows more of the artwork to arrive undamaged, and I prefer this.

Overnightprints.com is expecting a 6.25"x4.25" set of images (front and back) at 300dpi. The "safe zone", the zone where your text is guaranteed not to be trimmed, measures 5.875"x3.875".

If you are setting this up in a graphics program, set the bleed zone to +0.125", and the safe zone to -0.125". This is how my InDesign template is set up. The final graphic size will be 6" x 4", and you can see the printed and delivered results here:

Here is the back side of the two card designs:

If you want to use a photograph or other artwork for the front of your card, you'll want it to be at minimum 1875x1275 pixels. If it's larger, that's okay; you can scale down, or just move the image around and crop out the bit that you want. You can see in the screenshot below that the NYAN cat image is larger than the bleed zones. When I export the 2 page PDF, it trims this stuff off and only the 6.25"x4.25" image remains.
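As a quick sanity check on those numbers (my addition, not from the original post): the bleed-size canvas is 6.25" x 4.25", and at 300dpi that works out to:

```shell
# Minimum pixel dimensions for the front artwork: bleed size (inches) * 300 dpi
awk 'BEGIN { printf "%dx%d\n", 6.25 * 300, 4.25 * 300 }'
# prints 1875x1275
```

The same arithmetic gives you the pixel size of any other print dimension; just swap in the inches.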

At least in InDesign, when you export, make sure you use the PDF Print Quality setting and the Document Bleed and Slug settings. This just includes the extra 0.125" margins around the core 6"x4" image.

The output looks like this:

Once you have this PDF, you can upload it to Overnightprints. It will give you a preview of the front and back and show you the bleed and safe zone markers. If any of your text is not contained within the safe zone, it's likely to be chopped. If you kept it inside the inner box of the template I've provided, this should not happen.

Open Access

I realize that InDesign isn't a free program. I'll post an update soon with an Inkscape compatible version. I just had a copy of InDesign due to generous university license arrangements, and it was super easy to use and set up.

Files

Conclusion

BTW - I didn't create NYAN cat. I'm sure you know this, but I'm explicitly stating it just in case. (NYAN cat is the pop-tart cat with the rainbow...) The Reddit "crest" was originally created by /u/licenseplate/.

I'm willing to help anyone with their QSL card layout. Email me - you should be able to figure out my email address if you've found this blog post! If you're a redditor and have all the text, graphics, etc., but no access to InDesign, I'll help you with the whole layout process and export you a PDF. I'm not a graphics designer, and am not generally offering my services. Trust me, you'd not want to utilize me in the graphic design domain! :)

73 de k9daj / ae

For the past four months, I've been studying to obtain amateur radio operator licenses at varying class levels. This past Saturday, 1 June 2013, I obtained the highest-level license, the Amateur Extra. I can now legally operate on all frequencies allocated to the Amateur Service.

On 10 March 2013 I passed the Technician level test. A few weeks prior, KB9JHU led a "ham-cram", an all-day, learn-all-the-material-in-one-sitting event. I learned quite a bit, and followed up this instruction with practice test drilling at HamStudy.org. I've had good friends involved in ham radio going all the way back to the late 90s (my teenage years), but I was so entranced with the BBS and computing scene that radio never quite captivated my interest. My neighbor at the time, WA9ALY (SK), spent quite a bit of time evangelizing ham radio while we conquered various computing hardware and software issues. I really wish I'd known more about HF and packet at the time, as it would have been quite impressive to blast data thousands of miles without any infrastructure. Despite my lack of uptake at the time, the seed was planted and my fate was sealed. Thanks to my "XYL", I entered into an amateur radio family. AB9D has been urging me for the past 5 years to get my license, and with this post I want to share my experiences and methods of traversing the Technician, General, and Extra class exam material.

6 April 2013 marked the passing of the General exam. The Extra followed 2 months later, on 1 June.

Obsessive drilling with hamstudy.org is the primary method I used to progress through the licensing levels. If you're currently sitting at either the Technician or General level, becoming a /AG or /AE is only 2 weeks away if you're steadfast and dedicated.

In my estimation, 1 hour per day for 2 weeks is all it takes to move to the next level if you're focused for that 1 hour. I probably spent a bit more time, due to my slightly less-than-disciplined study attitude, but if you are ready, you'll pass without issues.

The key, in my opinion, is setting a deadline. Let others know you're going to become a "General" or "Extra" on XYZ date. The social pressure of being "a failure" may be of assistance depending on your personality type. Personally, I like deadlines, be they fake or real. In all honesty, there is really no failure. Take the tests 500 times if that's what it takes.

If you're consistently passing the practice tests, and you've taken at least 5 of them, then you're almost certainly going to pass the test when you take it during the next testing session.

The great thing about Hamstudy.org is that the guys there are dedicated and constantly improving the site. Email them if you don't like something or think something should be different. Chances are you'll get a response, and your request will be scheduled for implementation.

Immutant and AWS: HornetQ Node Id Issue

This is just a quick note about a possible JBoss / HornetQ issue that may crop up if you're building AMIs from a known-working setup, and plan to use that AMI to fire up multiple worker nodes in a cluster.

I've noticed this forever-repeating message in my logs on the n>1 nodes in my test clusters:

19:29:52,412 WARN [org.hornetq.core.client] (hornetq-discovery-group-thread-dg-group2) HQ212050: There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=8579c60f-cc7b-11e2-8e0f-cd3567e135ae
19:29:52,413 WARN  [org.hornetq.core.client] (hornetq-discovery-group-thread-f193984e-cc83-11e2-b66d-cf94e9e2fc58) HQ212050: There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=8579c60f-cc7b-11e2-8e0f-cd3567e135ae

I believe the problem is that I had cloned a working, already-set-up machine, which left behind some node ID files that are not updated when the new AMIs come online.

My solution to the problem was to remove all the files from the JBoss AS7 data directory:

rm -rf ~/.immutant/current/jboss/standalone/data/*

I'm sure there is a smaller subset of files you can remove - probably just the HornetQ files (the messagingbindings and messagingjournal directories). The directory listing should look something like this (before you remove them, of course!):

$ tree
.
├── content
├── infinispan
│   └── web
│       ├── default-host
│       │   └── ROOT
│       └── repl
├── messagingbindings
│   ├── hornetq-bindings-1.bindings
│   ├── hornetq-bindings-2.bindings
│   ├── hornetq-jms-1.jms
│   └── hornetq-jms-2.jms
├── messagingjournal
│   ├── hornetq-data-1.hq
│   ├── hornetq-data-2.hq
│   └── server.lock
├── messaginglargemessages
└── tx-object-store
    └── ShadowNoFileLockStore
        └── defaultStore
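If you'd rather try that narrower cleanup, here is a sketch - with the untested assumption that only the HornetQ stores carry the node ID:

```shell
# Narrower alternative (assumption: only the HornetQ stores carry the node id).
# DATA_DIR defaults to the JBoss AS7 standalone data directory used above.
DATA_DIR="${DATA_DIR:-$HOME/.immutant/current/jboss/standalone/data}"
rm -rf "$DATA_DIR/messagingbindings" "$DATA_DIR/messagingjournal"
```

If the warning still appears after baking this into the AMI, fall back to wiping the whole data directory as shown above.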

Creating the AMI after removing these files seems to get rid of the node ID issue when launching multiple instances from the newly created AMI.

Disclaimer: I'm far from a JBoss / HornetQ expert, so feel free to provide superior/alternate solutions!

AWS, Immutant, Torquebox, and Clustering - Part 1

note: these writeups assume you've got a working Clojure/JVM environment, know how to use the basics of AWS (ec2, AMI, S3), and have played with single-node Immutant

Amazon Web Services + Immutant and Torquebox

Immutant and Torquebox are completely awesome, as is the community support and responsiveness on the Freenode IRC channel #immutant. After lots of feedback, suggestions, and help, it seemed like it would be worthwhile to document the setup procedure for this stack running clustered on AWS.

Out of the box, Immutant is configured to use multicast for node discovery, which AWS does not support. I wanted a setup that would allow me to dynamically fire up arbitrary worker nodes (Immutant/TorqueBox) that would participate in the cluster and register with a fronting load-balancing Apache / mod_cluster instance.

Elastic IPs are limited on AWS, so I wanted to use as few as possible. In my setup, load balancers get an Elastic IP, as do the database master nodes. Immutant/TorqueBox nodes are created from AMIs I've built in advance, and use whatever address AWS assigns. These AMIs dynamically pull configuration from my git repositories and set themselves up at boot time. I should probably look at Pallet, but I've just not had enough time. I ended up cobbling together shell scripts that create and destroy nodes based on the AMI id.

My Needs/Setup (Overview)

  • AWS instance running Apache/mod_cluster, with an AWS Elastic IP
  • An arbitrary number of Immutant/TorqueBox AWS instances participating in a cluster
  • A MongoDB instance with an Elastic IP.
  • Lots of configuration checked into DVCS (git)

Immutant and AWS

I'll start with the most fun and important part first. I'll give my configurations for Apache/mod_cluster in a follow up post. Most people will probably be interested in the Immutant on AWS cluster configuration above all else.

Immutant has excellent tutorials and instructions, so if you're completely new to Immutant, you'll want to check those out first.

To get this working, you'll be editing some XML. Sounds like fun right? It's not really that bad, but it's XML nonetheless.

standalone-ha.xml

standalone-ha.xml is where most of the changes to the stock configuration will be made. First we'll install Immutant from the command line, if you've not done so already:

# assumes leiningen, and the lein-immutant plugin 
lein immutant install

From here we can take a look at the standalone-ha.xml file located in the ~/.lein/immutant/current/jboss/standalone/configuration/ directory. As part of my deployment process I have this file copied to the appropriate directory when my AMI comes up. I keep a copy of standalone-ha.xml in a git repo alongside other configurations and dev-ops type scripts. You'll have to do something similar if you want to have the ability to arbitrarily bring up and shut down members of your cluster.

Based on the suggestion here of not binding TorqueBox's public interface to 0.0.0.0, I altered the <interfaces> tag and changed the public interface sub-element from this:

<interface name="public">
  <inet-address value="${jboss.bind.address:127.0.0.1}"/>
</interface>

to this:

<interface name="public">
  <nic name="eth0"/>
</interface>

On my AMIs, eth0 is the internal AWS IP address. Your instances should have security group settings that allow UDP and TCP communications as well, but I'll get to that in a follow up post.

Next, we need to configure JGroups to use some method of TCP communications for broadcast and discovery. The default should look something like this:
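The original snippet didn't survive here. As a hedged reconstruction based on the stock JBoss AS7 standalone-ha.xml (the exact protocol list varies by version, so treat the details as assumptions), the default JGroups subsystem looks roughly like:

```xml
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="udp">
  <stack name="udp">
    <transport type="UDP" socket-binding="jgroups-udp"/>
    <protocol type="PING"/>
    <protocol type="MERGE2"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK"/>
    <protocol type="UNICAST2"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="UFC"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
  </stack>
  <stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="MPING" socket-binding="jgroups-mping"/>
    <protocol type="MERGE2"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK"/>
    <protocol type="UNICAST2"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="UFC"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
  </stack>
</subsystem>
```

Note the stock tcp stack still uses MPING (multicast) for discovery, which is why it needs changing on AWS.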

We need to change the default-stack to tcp, and then modify the TCP stack. I removed the UDP configuration completely, but you can leave it alone if you want.

MPING will not work on AWS, but thankfully there are TCPPING and S3_PING. S3_PING is ultimately what you'll want to set up if you want to add and remove nodes from your cluster without touching the configuration, but TCPPING is easier to set up and verify, so I'll cover that first. For more JGroups info, check the protocol list.

You'll want to replace the above JGroups configuration with the configuration below:
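The replacement snippet also didn't survive here. A hedged sketch of the TCP stack with TCPPING discovery (the TCPPING element matches the one quoted later in this post; the surrounding protocols are assumptions carried over from the stock AS7 configuration):

```xml
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">
  <stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="TCPPING">
      <property name="timeout">30000</property>
      <property name="initial_hosts">ip.address.node.1[7600],ip.address.node.2[7600]</property>
    </protocol>
    <protocol type="MERGE2"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK"/>
    <protocol type="UNICAST2"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="UFC"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
  </stack>
</subsystem>
```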

Of course, change ip.address.node.1 to the address bound to eth0 on your first cluster node, and ip.address.node.2 to the address bound to eth0 on the second.

Finally, we need to tell HornetQ to use our JGroups TCP configuration instead of the UDP (which is the default).

We'll be looking at the subsystem:

<subsystem xmlns="urn:jboss:domain:messaging:1.3">

In this subsystem, we need to change:

<broadcast-groups>
    <broadcast-group name="bg-group1">
        <jgroups-stack>${msg.jgroups.stack:udp}</jgroups-stack>
        <jgroups-channel>${msg.jgroups.channel:hq-cluster}</jgroups-channel>
        <broadcast-period>5000</broadcast-period>
        <connector-ref>netty</connector-ref>
    </broadcast-group>
</broadcast-groups>
<discovery-groups>
    <discovery-group name="dg-group1">
        <jgroups-stack>${msg.jgroups.stack:udp}</jgroups-stack>
        <jgroups-channel>${msg.jgroups.channel:hq-cluster}</jgroups-channel>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>

To:

<broadcast-groups>
  <broadcast-group name="bg-group1">
    <jgroups-stack>${jgroups.stack:tcp}</jgroups-stack>
    <jgroups-channel>${jgroups.channel:hq-cluster}</jgroups-channel>
    <broadcast-period>2000</broadcast-period>
    <connector-ref>netty</connector-ref>
  </broadcast-group>
</broadcast-groups>
<discovery-groups>
  <discovery-group name="dg-group1">
    <jgroups-stack>${jgroups.stack:tcp}</jgroups-stack>
    <jgroups-channel>${jgroups.channel:hq-cluster}</jgroups-channel>
    <refresh-timeout>10000</refresh-timeout>
  </discovery-group>
</discovery-groups>

Test It, Round 1

At this point, if you fire up Immutant on each of the nodes you configured in the initial_hosts setting of the TCPPING JGroups configuration, you should see a message in the log file indicating that one node became the master and the other did not. You'll also see a cluster count message:

18:42:57,478 INFO  [org.jboss.as.clustering] (MSC service thread 1-1) JBAS010238: Number of cluster members: 2
18:42:57,479 INFO  [org.projectodd.polyglot.hasingleton] (MSC service thread 1-1) inquire if we should be master (testapp.clj-hasingleton-global)
18:42:57,480 INFO  [org.projectodd.polyglot.hasingleton] (MSC service thread 1-1) Ensuring NOT HASingleton master (testapp.clj-hasingleton-global)
18:42:57,480 INFO  [org.projectodd.polyglot.hasingleton] (MSC service thread 1-1) Started HASingletonCoordinator

Above we see the log output of our non-master node. You can find the full logs in the ~/.lein/immutant/current/jboss/standalone/log directory.

S3_PING, standalone-ha.xml

TCPPING is great for at least verifying that your AWS settings are correct and that JGroups is working properly. I spent a lot of time with a non-working S3_PING configuration that didn't report any errors but also showed no cluster communications. After many helpful suggestions from #immutant, I cranked up logging levels and iterated through possible problems until I eventually got it working.

S3_PING is great if you want a dynamic AWS environment - no hard-coding IP addresses at all. You just configure an S3 bucket and get the AWS keys for a user who has read/write/list privileges on that bucket. Since IP addresses can change on AWS, you're really just asking for trouble if you rely on them. You could of course use Elastic IPs, but you do not have an unlimited number of them.

The change is quite simple. In the JGroups subsystem, replace:

<protocol type="TCPPING">
  <property name="timeout">30000</property>
  <property name="initial_hosts">ip.address.node.1[7600],ip.address.node.2[7600]</property>
</protocol>

With:

<protocol type="S3_PING">
  <property name="secret_access_key">TOPSYKRETS</property>
  <property name="access_key">TOPSYKRETS</property>
  <property name="location">some.s3.bucket.name</property>
</protocol>

I'm pretty sure you have to make the S3 bucket before using the configuration, so if you see any strange stuff in the logs, double check your permissions.

Everything should work as it did before, when JGroups was set to use TCPPING. To test, you could create jobs scheduled to run on only one node of the cluster, send messages to queues and topics, and check the contents of your distributed caches on the nodes using nrepl.

Jim Crossley put up a really great Overlay Screencast a couple of months ago that demonstrates Ruby and Clojure apps interacting. If you've not played with message queues or polyglot systems, this is a good place to get started.

In the next AWS / Immutant post, I'll provide details on my mod_cluster / Apache configuration for load balancing. I'll also talk a bit about my Rails / Clojure interaction, and how my AMIs are configured to pull configuration from GitHub.