How SlidePay Built A Swiped Payments Service for Developers

Published November 24, 2013 21:59 | By Yonas Beshawred


Editor's note: Joel Christner is Co-founder & CTO at SlidePay. Matt Rothstein is Founding Engineer at SlidePay.


SlidePay

SlidePay's mission is to make it easy for any developer to accept in-person payments through their app. They know how hard it is to build an app that allows you to accept credit cards in person because they've done it. The team at SlidePay started out by building a full point of sale application and credit card terminal for small businesses and merchants. They soon realized that most of their customers had very specific requirements based on the type of businesses they had. So they decided to allow anyone to build their own app that accepts payments via credit card swipe without having to go through all the hassle that they did. We sat down with Joel and Matt to find out more about how they built their platform.


SlidePay's Tech Stack

Languages & Frameworks

C#, Ruby, Rails, backbone.js, Objective-C

Cloud Services

Windows Azure, Heroku, GitHub, Travis CI, Papertrail, Code Climate, Mailgun



LS: Let's talk about what you all do.

J: SlidePay is a cloud-hosted payments platform that allows developers to integrate functionality they could get from something like Square directly into their own app. What we're finding is that the payments market is starting to verticalize. As an example, people in the nail salon industry or music merchandise industry want a very tailored solution. Horizontal solutions like Square are great, but they don't address vertical requirements very well. Developers want to build solutions for those spaces, and we enable them to do that.

We also have reference applications that we built using our own APIs that have been in various app stores for over a year. These applications are a point of sale and a credit card terminal, and are available on iOS devices (iPhone and iPad), Windows Phone, and Windows 8. These are horizontal applications that are similar to the other products on the market, and while many merchants use them, their primary purpose is to help developers get the experience of using our payments platform.

If you look at companies that are doing things like ordering over mobile devices, or loyalty, collecting donations at a fundraiser, or the nail salon or music verticals we described, they're all forced into building a payments infrastructure because that's part of what their package has to do. I can't sell a nail salon point of sale to a nail salon owner unless I have a way for them to accept credit cards. So if I was the developer for that app, I'd be forced into building all that payments infrastructure that nobody wants to build. It's difficult to build and difficult to get right. It's not impossible; good development teams can do it. What we've done is not rocket science, it just takes a lot of time and effort.

We've taken the best practices from building our own applications and made them available to the world.

LS: Awesome. Let's get into your tech stack. You guys have a lot of moving parts, platforms to support, etc.

J: We use Windows Azure as our infrastructure cloud. It's a PCI compliant platform and our software is compliant as well. We started on Amazon EC2. We had a sneaking suspicion that they were oversubscribing their hypervisors so we migrated, but again that's just a suspicion, I don't have any proof. There were just some random points in the day where our VMs would be at 50% memory utilization and start paging like crazy. Paging memory to disk, so things that should have been done in RAM were being done on disk; response times became inordinately large and it created an avalanche effect. So we moved to Azure. I'm not sure if it's that they don't oversubscribe or they're just underutilized or their architecture is more stable, whatever the case may be. We're happy with Azure and have instances in multiple regions. By and large, the cloud has been more reliable than any datacenter we could ever use as a colocation facility. If we were to use a colocation facility, it would actually be two facilities, and for each we would need two network carriers, redundant routers, redundant firewalls behind those routers, redundant switches, servers, and then local clustering, geoclustering, and data replication. The only disadvantage I see about being in the cloud is that I can't manage data protection the way that I want to.

Our code is all C# on the backend, using IIS and Microsoft SQL Server. That's for the actual backend API endpoint that developers connect to. There are many instances that we're running: database servers, logic servers, request routers, all of those are using the same codebase and platform (C#, IIS, SQL Server). We have another layer that sits directly in front of that and that's our SDKs. We have SDKs for just about everything under the sun: Windows 8, Windows Phone 8, iOS, Android, Node, Python, Ruby, and PHP. And the nice thing about that code is that, from a developer's perspective, it's very easy to include that code in their own codebase and start processing payments quickly. Most of the smaller teams that are just launching new products have been able to get up and running in about a week. And that's fully tested. We have some that have gotten it up and running in a day. So it really depends on to what degree you want to integrate. It's very flexible, so if you want everything to just show up in your customer's bank account, that could happen very quickly. But if you want to do things like control the payouts, so that you could send some of the payment volume to one bank account and some of it to another, there are various features like that that we provide, and those extend the development cycle. All in all it's pretty quick though.
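For illustration, here is a rough C# sketch of what submitting a swiped payment to a REST payments backend can look like from a developer's code. The host, route, field names, and auth scheme are hypothetical placeholders rather than SlidePay's actual API; the real SDKs wrap this plumbing for you.

```csharp
// Hypothetical example of posting a card-present charge to a REST payments API.
// None of the endpoint or field names below are SlidePay's real API surface.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class PaymentClientSketch
{
    static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("https://api.example-payments.test/") // placeholder host
    };

    // Submit a charge built from encrypted swipe data returned by a card reader SDK.
    static async Task<string> ChargeAsync(string apiKey, string encryptedTrackData, int amountCents)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, "payments") // placeholder route
        {
            Content = new StringContent(
                "{\"amount_cents\":" + amountCents +
                ",\"encrypted_track_data\":\"" + encryptedTrackData + "\"}",
                Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiKey); // placeholder auth

        HttpResponseMessage response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();                 // surface HTTP-level failures immediately
        return await response.Content.ReadAsStringAsync();  // JSON describing the authorization result
    }
}
```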

So backend, SDKs, and in front of that we have the mobile and web properties. The reference mobile apps and then adjacent to that are the web properties for things like managing accounts and administrative functions.

LS: Cool. Can you talk a bit about your build process?

J: We use Git for version control. But I don't enforce anything on anyone in terms of process. I set some high level objectives and goals and occasionally interrupt with tactical demands for things that need to get done. But in terms of a build process, or automated testing, harnesses and such, I don't get involved in any of that for anybody other than myself. When I look at Matt, he's going to understand web development infinitely better than me. And what I do on the backend, while I know that I'm replaceable and that anyone here can do what I'm doing, I'm going to know that system better than anybody else. So similar to how I don't impose anything on anyone else, I expect they're not going to do that to me as well. So we operate as a bunch of independent silos that only come together from an objective perspective.

We use GitHub for retaining code, some of us use branches, some of us don't. It really comes down to the developer. That's one of the nice things about SlidePay. Every developer here has control over their own destiny. We all have requirements for each other, enhancements, etc that we may plan and design together but as far as having a structure for how we do builds and automated testing, etc, it's completely up to the developer.

LS: And part of that is because you're all on different platforms right? There's the API backend, mobile, then web. So you all actually have separate codebases.

J: Right. But it's also because we're all one-man teams for our platforms. I handle backend, Matt handles web, and Alex handles mobile. So once we all start growing our individual teams and the collective team, there will probably be more formal processes and structure around some of this.

LS: Gotcha. So what's your process specifically?

J: For me, I have many different test clients. My middle name is "try-catch." If you look at my code, it's surprising how fast it runs given such a high degree of error checking and error catching. I have a "failure-first coding" mentality. I assume my code is going to "s**t the bed", so I'm looking for it to "s**t" the bed at all times. And in so doing I'm finding those problems and propagating them up to the appropriate layer in the stack to ensure I handle them appropriately and keep any failure contained. Thankfully since it is a RESTful service, there is one entry point and one exit point.
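As a rough illustration of that failure-first approach (the type and method names here are made up, not SlidePay's code), a lower layer catches the raw failure and rethrows it in a contained, domain-level form, and the single entry point converts anything that escapes into a well-formed API response:

```csharp
// Illustrative only: low layers assume failure, wrap it, and propagate it upward;
// the single REST entry/exit point keeps every failure contained in a clean response.
using System;
using System.Data.SqlClient;

public class ApiResult
{
    public bool Success;
    public int HttpStatus;
    public string Body;
}

public static class BalanceEndpointSketch
{
    // Data layer: catch the raw SqlException and report a domain-level error upward.
    static decimal LookupBalance(string accountId)
    {
        try
        {
            using (var conn = new SqlConnection("...")) // connection string elided
            using (var cmd = new SqlCommand("SELECT balance FROM accounts WHERE id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", accountId);
                conn.Open();
                object result = cmd.ExecuteScalar();
                if (result == null)
                    throw new ApplicationException("account not found: " + accountId);
                return (decimal)result;
            }
        }
        catch (SqlException ex)
        {
            // Don't leak the database failure; hand the layer above something it understands.
            throw new ApplicationException("balance lookup failed", ex);
        }
    }

    // Single entry/exit point: nothing escapes unhandled, every failure becomes a response.
    public static ApiResult GetBalance(string accountId)
    {
        try
        {
            decimal balance = LookupBalance(accountId);
            return new ApiResult { Success = true, HttpStatus = 200, Body = "{\"balance\":" + balance + "}" };
        }
        catch (Exception ex)
        {
            return new ApiResult { Success = false, HttpStatus = 500, Body = "{\"error\":\"" + ex.Message + "\"}" };
        }
    }
}
```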

As far as the build process, I let Visual Studio do it all for me, mostly because I'm lazy in that regard. I have various configurations based on the server that the code is getting deployed to, and part of the build and compile step is to make sure the right config files are in place. And prior to actually publishing (I use SFTP), the build will make sure the right files are in place based on the configuration and then push the right code to the right system. I typically test locally, then push it to a dev environment, which is usually a bit ahead of prod in terms of the codebase.
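One common way to wire up that kind of per-server configuration in a Visual Studio build (a sketch of the general idea, not necessarily SlidePay's setup) is a conditional compilation symbol per build configuration, so each publish target bakes in its own endpoint and config file:

```csharp
// Hypothetical per-environment settings selected at compile time by the build
// configuration's conditional compilation symbols (DEV_SERVER, PROD_SERVER, etc.).
public static class DeploymentConfig
{
#if PROD_SERVER
    public const string ApiBaseUrl = "https://api.example.test/";     // production endpoint (placeholder)
    public const string ConfigFile = "app.prod.config";
#elif DEV_SERVER
    public const string ApiBaseUrl = "https://dev-api.example.test/"; // dev endpoint (placeholder)
    public const string ConfigFile = "app.dev.config";
#else
    public const string ApiBaseUrl = "http://localhost:8080/";        // local testing default
    public const string ConfigFile = "app.local.config";
#endif
}
```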

LS: Cool. So that covers the API backend. Let's talk about the web properties. What are the different web products?

M: The web properties include a management dashboard that constitutes a back office suite for the reference apps (Cube POS, Cube Card Terminal) and an internal administrative dashboard for internal functions. Through our SlidePay products, we have six different SDKs that I manage. With the Cube products, we run all of that stuff on Heroku.

My process consists of developing locally with as close to a mimicked system as I can get on my machine. So I'll test locally with our development endpoint and then our production endpoint, then I'll push it up to a development-facing Heroku application, then a production-facing Heroku application with slightly different compilation/configuration settings. And then also to our other live production system. And when I say prod, I either mean our live customer-facing site or our live customer-facing endpoint. So we have three Heroku endpoints that allow us to mimic live production conditions and get progressively closer to live production.

And so the management dashboard is a ton of client-side JavaScript on top of a Rails app. The internal admin application is also a mix of JavaScript and Rails. We use the SlidePay Ruby SDK for both products on the backend to work with our API. Then we've got a number of SDKs on GitHub. For those I use more standardized testing techniques and continuous integration: RSpec for Ruby, and I'll probably start using Cucumber soon.

For all the SDKs, I have unit testing. And I integrate with Travis CI. Every time I publish to GitHub, Travis sees it and runs my test suite and then there's a badge on the GitHub repo. Travis is pretty fantastic, they have almost universal coverage. If you've got an open source project, they don't even charge you.

There's another service I use called Code Climate, which evaluates your code. I use that on the Ruby SDK. They just recently added JavaScript support but I haven't integrated with that yet.

For error monitoring, I actually rolled my own solution. I found that a lot of the error solutions out there were almost suitable but not great. Part of what we really need to look at is the information going back and forth, and I wasn't willing to give that information out. And most of the time, they didn't actually have a place for me to put that information (request bodies and response bodies). Some of that stuff I can't give out because it would constitute a PCI compliance issue. And in other cases, it might just be personally identifiable and I don't want to do that. So anyways, I actually receive an error email for every single error that occurs on our client side web products. Which thankfully isn't that bad because we don't get a ton of errors. We send those through Mailgun.
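Matt's web products are Ruby and JavaScript, but to keep the examples in one language, here is a C# sketch of the general pattern: scrub anything sensitive out of the error details, then send the report through Mailgun's HTTP messages API. The domain, addresses, and redaction rules are placeholders; only the endpoint shape follows Mailgun's public documentation.

```csharp
// Sketch of a roll-your-own error reporter: redact card-like data, then email the
// error through Mailgun. Domain and addresses below are placeholders.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

static class ErrorReporterSketch
{
    // Strip values that would be a PCI or privacy problem before they leave our systems.
    static string Redact(string text)
    {
        text = Regex.Replace(text, @"\b\d{13,19}\b", "[REDACTED PAN]"); // anything card-number shaped
        text = Regex.Replace(text, "\"cvv\"\\s*:\\s*\"\\d+\"", "\"cvv\":\"[REDACTED]\"");
        return text;
    }

    static async Task SendErrorEmail(string mailgunApiKey, string errorDetails)
    {
        using (var client = new HttpClient())
        {
            // Mailgun's API uses HTTP basic auth with the literal username "api".
            var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("api:" + mailgunApiKey));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "from",    "errors@example.test" },   // placeholder sender
                { "to",      "dev-team@example.test" }, // placeholder recipient
                { "subject", "Client-side error" },
                { "text",    Redact(errorDetails) }
            });

            // "example.test" stands in for the sending domain configured in Mailgun.
            var response = await client.PostAsync("https://api.mailgun.net/v3/example.test/messages", form);
            response.EnsureSuccessStatusCode();
        }
    }
}
```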

I use GitHub Issues for managing what needs to be implemented. I wrote a Gmail plugin to automatically look at the error emails that come in and give me some additional information on who caused the error and what their current status is, etc.

On the Heroku side things are a little different. They make it really easy to use solutions like Logentries or Loggly. I've actually settled in with Papertrail. I've tried a few solutions and Papertrail seems to be the most unobtrusive. The balance of intrusion and functionality is best with Papertrail, and the price isn't bad. They have pretty robust search functionality, and their customer service has been excellent.

J: I rolled my own performance monitoring. My use case is very database query-centric. So I record every API call; its response time is measured and logged. For database queries, I actually kick off a thread after every DB query so that I can store the response time of that exact query, and then do analytics over the queries that I'm submitting to figure out where I can optimize.
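A minimal sketch of that idea (names and the logging target are illustrative): wrap each database call in a stopwatch and hand the measurement to a background thread so the analytics write never slows down the request path.

```csharp
// Illustrative query timing: measure each query, then record the timing off the
// request thread so analytics can later find the queries worth optimizing.
using System;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Threading;

static class TimedDbSketch
{
    public static object ExecuteScalarTimed(string connectionString, string sql)
    {
        var stopwatch = Stopwatch.StartNew();
        object result;
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            result = cmd.ExecuteScalar();
        }
        stopwatch.Stop();

        // Record the exact query text and its elapsed time on a background thread.
        ThreadPool.QueueUserWorkItem(_ =>
            RecordQueryTiming(sql, stopwatch.ElapsedMilliseconds));

        return result;
    }

    static void RecordQueryTiming(string sql, long elapsedMs)
    {
        // Placeholder: in practice this would write to a timings table or log store.
        Console.WriteLine("{0} ms :: {1}", elapsedMs, sql);
    }
}
```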

LS: Any other tools or frameworks, etc that have been key to your workflow that we didn't touch on yet?

M: Charles Proxy. It's a desktop application that lets you run all your requests through it. Even if you're interacting with a secure endpoint, you can see what data you're sending and getting back to make sure that all your requests are well formed.

In terms of frameworks, I've done a lot of development in backbone.js. So as that plugin ecosystem has gotten richer, it's been easier to develop more quickly. And I've produced some of my own as well.

LS: So what are some of the in-person payments-specific challenges you faced from a technology perspective? You have to deal with a whole bunch of problems on top of the normal payments issues like PCI compliance - things like coming up with a reliable swiping algorithm.

J: Well first, the biggest headache with the payments space in general is that, anytime you move money you are at the core of a business. A business is in business to make money. Anytime you're a part of that business' architecture, at its heart, if there's a problem with you, there's a problem with the business. If things are slow, if things are not working, it creates a very problematic situation. People expect things to just work, and they should. They have to. So it's this super mission-critical service that you have high expectations for and as a provider we have to make sure we meet those expectations.

And when you're doing in-person payments for individual merchants and businesses, the expectations are still just as high except you have all these other challenges on top of it. So a good example is one of our development partners in the merchandise space. They connected us with a large music production company and we're on tour with a number of their artists. Some of the larger shows have lines of people at the merchandise booths that can literally be tens of people deep, and for that kind of a merchant the difference between 15 seconds versus three seconds to process a transaction is material to their business. The vast majority of our transactions from a backend perspective happen in under 3 seconds - meaning we connect to the processor, the request is marshaled over the Visa or MasterCard network, all the way to the issuing bank, and a response makes its way back to us in about two and a half seconds. From our perspective things look great. But from the customer side, if there happens to be any delay in the cellular network - because you're at a concert with 20,000 people, 5,000 of them have AT&T LTE, another 5,000 are on Verizon, and you're on congested towers - the response time is a lot slower. So it might take 30 seconds on the device, even though it's 2.5 seconds on our end. That's a problem for them. So we have this situation where people expect it to always be fast and working. To sum it up, I would say this vertical is much less forgiving than others, because if the service fails you either lose customers fast or you lose money.

Also, for in-person payments there's data that you can't store, including magnetic stripe data, CVV, and other requirements. Those are actually really easy - the more difficult part is that behind the scenes this whole thing is the epitome of a distributed system. There's the card, the card came from an issuing bank that's connected to the card networks (Visa, etc.), which our processor is connected to, our backend has to connect to that processor, our iPad has to connect to that backend, and the iPad app has to be able to read data from an SDK from a third party vendor that manufactures the card reader, and that's going to create data that only our backend can decrypt. So we already have five different layers right there. When you add in ancillary pieces like logging, email notifications, performance monitoring, administrative layers, load balancing - you end up with a massively distributed system.

LS: So right now, when developers sign up for SlidePay, you give them a reader.

J: We actually support many card readers if developers want to use a different one. We support a basic audio reader plugged into the audio jack, a reader from AnywhereCommerce (an encrypted audio port reader), the suite of MagTek readers (Lightning, 30-pin, or audio port), and USB card readers with keyboard emulation.

