Trending Feed

Decision at Stream about Babel, styled-components, Expo, JavaScript, Chat by Stream, React Native, Stream

Avatar of vishalnarkhede
Javascript Developer at getStream.io ·

Recently, the team at Stream published a React Native SDK for our new Chat by Stream product. React Native brings the power of JavaScript to the world of mobile development, making it easy to develop apps for multiple platforms. We decided to publish two different endpoints for the SDK – Expo and React Native (non-Expo) – so that consumers with React Native-only projects can avoid the hurdle of setting up the Expo library.

Style customization is often a deal-breaker for frontend SDKs. To solve this, we decided to use styled-components in our SDK, which makes it easy to add support for themes on top of our existing components. This practice reduces the maintenance effort for styling custom components and keeps the overall codebase clean.
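As a rough sketch of how such theming can compose (the theme shape and names here are hypothetical, not the SDK's actual API), a consumer-supplied partial theme can be deep-merged over the library defaults before being handed to styled-components' `ThemeProvider`:

```javascript
// Hypothetical default theme shipped with a chat SDK.
const defaultTheme = {
  colors: { primary: '#006cff', background: '#ffffff' },
  messageBubble: { borderRadius: 16 },
};

// Deep-merge a consumer's partial theme over the defaults, so only
// the overridden values need to be specified.
function mergeTheme(base, overrides) {
  const out = { ...base };
  for (const key of Object.keys(overrides)) {
    const value = overrides[key];
    out[key] =
      value && typeof value === 'object' && !Array.isArray(value)
        ? mergeTheme(base[key] || {}, value)
        : value;
  }
  return out;
}

// A consumer overrides just the primary color; everything else is inherited.
const theme = mergeTheme(defaultTheme, { colors: { primary: '#ff4444' } });
// `theme` would then be passed to <ThemeProvider theme={theme}> so every
// styled component picks up the customization.
```

The payoff is that custom-styled components stay thin: they declare only their deltas from the default theme instead of restating every style.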

For module bundling, we decided to go with Rollup.js instead of Webpack due to its simplicity and performance for libraries and module providers. We are using Babel for transpiling code, enabling our team to use JavaScript's next-generation features. Additionally, we are using React Styleguidist for component documentation, which makes documenting the React Native code a breeze.
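A minimal Rollup-plus-Babel setup along these lines might look like the following (plugin names and options are illustrative of the era's tooling, not our exact configuration):

```javascript
// rollup.config.js – illustrative sketch, not the SDK's actual config.
import babel from 'rollup-plugin-babel';

export default {
  input: 'src/index.js',
  output: [
    { file: 'dist/index.cjs.js', format: 'cjs' }, // CommonJS for Node
    { file: 'dist/index.es.js', format: 'es' },   // ES modules for bundlers
  ],
  plugins: [
    // Transpile next-generation JavaScript for consumers.
    babel({ exclude: 'node_modules/**' }),
  ],
  // Leave peer dependencies to the consuming app's bundler.
  external: ['react', 'react-native'],
};
```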

19 upvotes·1 comment·11.5K views

Decision about GitHub, Scala, Ruby, TypeScript, Node.js, Python, Visual Studio Code

Avatar of mbnshtck
Principal Software Architect at Microsoft ·

I use Visual Studio Code because it's the best IDE for my open source projects using Python, Node.js, TypeScript, Ruby and Scala. Extensions exist for everything, and the GitHub integration is great. It makes development easy and fun.

2 upvotes·4.4K views

Decision at Rent the Runway about styled-components, PostCSS, Sass

Avatar of hcatlin
VP of Engineering at Rent The Runway ·

We use Sass because I invented it! No, that's not a joke at all! Well, let me explain. So, we used Sass before I started at Rent the Runway because it's the de-facto industry standard for pre-compiled and pre-processed CSS. We do also use PostCSS for stuff like vendor prefixing and various transformations, but Sass (specifically SCSS) is the main developer-focused language for describing our styling. Some internal apps use styled-components and @Aphrodite, but our main website is allllll Sassy. Oh, but the non-joking part is the inventing part. /shrug

4 upvotes·26.3K views

Decision at Intuit about Git, Karate DSL

Avatar of ptrthomas
Distinguished Engineer at Intuit ·

Karate DSL is extremely effective in those situations where you have a microservice still in development, but the "consumer" web-UI dev team needs to make progress. Just create a mock definition (feature) file, and since it is plain text, it can easily be shared across teams via Git. Since Karate has a binary stand-alone executable, even teams that are not familiar with Java can use it to stand up mock services. And the best part is that the mock serves as a "contract" – which the server-side team can use to practice test-driven development.

16 upvotes·2 comments·12.2K views

Decision at Atlassian about Azure Pipelines, jFrog, Octopus Deploy, AWS CodePipeline, CircleCI, Bitbucket, Jira

Avatar of ojburn
Architect at Atlassian ·

We recently added new APIs to Jira to associate information about Builds and Deployments to Jira issues.

The new APIs were developed using a spec-first API approach for speed and sanity. The details of this approach are described in this blog post, and we relied on using Swagger and associated tools like Swagger UI.

A new service was created for managing the data. It provides a REST API for external use, and an internal API based on GraphQL. The service is built using Kotlin for increased developer productivity and happiness, and the Spring-Boot framework. PostgreSQL was chosen for the persistence layer, as we have non-trivial requirements that cannot be easily implemented on top of a key-value store.

The front-end has been built using React; it queries the back-end service through an internal GraphQL API. We have plans to provide a public GraphQL API in the future.
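As a sketch of what that querying looks like from the front-end (the endpoint, query, and field names below are hypothetical, not the actual schema), a GraphQL call is just a POST with a query string and variables:

```javascript
// Build the POST options for a GraphQL request. The schema below is
// hypothetical – it only illustrates the shape of such a call.
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  };
}

const request = buildGraphQLRequest(
  `query Deployments($issueKey: String!) {
     deployments(issueKey: $issueKey) { environment state url }
   }`,
  { issueKey: 'ABC-123' }
);
// fetch('/internal/graphql', request) would then return the deployments
// associated with the given Jira issue.
```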

New Jira Integrations: Bitbucket, CircleCI, AWS CodePipeline, Octopus Deploy, jFrog, Azure Pipelines

12 upvotes·24.9K views

Decision at Stitch Fix about Amazon EC2 Container Service, Docker, PyTorch, R, Python, Presto, Apache Spark, Amazon S3, PostgreSQL, Kafka, Data, DataStack, DataScience, ML, Etl, AWS

Avatar of ecolson
Chief Algorithms Officer at Stitch Fix ·

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad-hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into our systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize the models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

#DataScience #DataStack #Data

19 upvotes·65.5K views

Decision at Rainforest QA about Terraform, Helm, Google Cloud Build, CircleCI, Redis, Google Cloud Memorystore, PostgreSQL, Google Cloud SQL for PostgreSQL, Google Kubernetes Engine, Kubernetes, Heroku

Avatar of shosti
Senior Architect at Rainforest QA ·

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go more in depth.

12 upvotes·43K views

Decision at Gap about Visual Studio Code

Avatar of deepakk
Sr. DevOps Engineer ·

I use Visual Studio Code because of its community support, the popularity it gained in a very short period, and the many extensions contributed by the community every day. The Python engine in VSCode makes my work life productive. My favorite extensions are:

  • GitLens
  • Kubernetes
  • Docker
  • Chef

Themes are always fun and make your development IDE productive, especially with colors and error indicators.

7 upvotes·9K views

Decision at SparkPost about GitHub, AWS Lambda, Amazon EC2 Container Service, AWS CodeDeploy, AWS CodeBuild

Avatar of cristoirmac
VP, Engineering at SparkPost ·

The recent move of our CI/CD tooling to AWS CodeBuild / AWS CodeDeploy (with GitHub), as well as moving to Amazon EC2 Container Service / AWS Lambda for our deployment architecture for most of our services, has helped us significantly reduce our deployment times while improving both feature velocity and overall reliability. In one extreme case, we got one service down from 90 minutes to a very reasonable 15 minutes. Container-based builds and deployments have made so many things simpler and easier, and the integration between the tools has been helpful. There is still some work to do on our service mesh & API proxy approach to further simplify our environment.

9 upvotes·2 comments·14.1K views

Decision at Segment about Swagger UI, ReadMe.io, Markdown, Postman, QA, Api, Documentation

Avatar of nzoschke
Engineering Manager at Segment ·

We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. A public API is only as good as its #documentation. For the API reference doc we are using Postman.

Postman is an “API development environment”. You download the desktop app, and build API requests by URL and payload. Over time you can build up a set of requests and organize them into a “Postman Collection”. You can generalize a collection with “collection variables”. This allows you to parameterize things like username, password and workspace_name so a user can fill their own values in before making an API call. This makes it possible to use Postman for one-off API tasks instead of writing code.
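To illustrate the idea (this is only a sketch of how `{{variable}}` templates resolve, not Postman's internals), collection variables substitute into the request before it is sent:

```javascript
// Resolve Postman-style {{variable}} placeholders in a URL template.
function resolveTemplate(template, variables) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}

// A user fills in their own values before making the call.
// The URL here is a made-up example, not a real API endpoint.
const url = resolveTemplate(
  'https://api.example.com/workspaces/{{workspace_name}}/sources',
  { workspace_name: 'my-workspace' }
);
// → 'https://api.example.com/workspaces/my-workspace/sources'
```

Unresolved placeholders are left intact, which is also roughly how Postman surfaces a missing variable to the user.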

Then you can add Markdown content to the entire collection, a folder of related methods, and/or every API method to explain how the APIs work. You can publish a collection and easily share it with a URL.

This turns Postman from a personal #API utility to full-blown public interactive API documentation. The result is a great looking web page with all the API calls, docs and sample requests and responses in one place. Check out the results here.

Postman’s powers don’t end here. You can automate Postman with “test scripts” and have it periodically run a collection’s scripts as “monitors”. We now have #QA around all the APIs in our public docs to make sure they are always correct.
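As an illustration of what such a test script looks like: real scripts run inside Postman against its built-in `pm` and chai-style `expect` APIs; the tiny stub below only exists so the sketch can run standalone.

```javascript
// Minimal stand-in for Postman's pm API, for illustration only.
const results = {};
const pm = {
  response: { code: 200, json: () => ({ ok: true }) },
  test: (name, fn) => {
    try { fn(); results[name] = 'pass'; } catch (e) { results[name] = 'fail'; }
  },
  expect: (actual) => ({
    to: {
      equal: (expected) => {
        if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
      },
    },
  }),
};

// The kind of assertions a monitor would run on a schedule.
pm.test('status code is 200', () => pm.expect(pm.response.code).to.equal(200));
pm.test('body reports ok', () => pm.expect(pm.response.json().ok).to.equal(true));
```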

Along the way we tried other techniques for documenting APIs like ReadMe.io or Swagger UI. These required a lot of effort to customize.

Writing and maintaining a Postman collection takes some work, but the resulting documentation site, interactivity and API testing tools are well worth it.

29 upvotes·1 comment·72.2K views

Decision at UI licious about Go, npm, Node.js

Avatar of PicoCreator
CTO at Uilicious ·

Our CLI was originally written in Node.js with npm, 2 years ago. We have now migrated to Go!

It was something we quickly hacked together at the early beginnings of Uilicious when our focus was to move fast and iterate the product quickly. We wanted to roll out the CLI ASAP, so that users with a CI/CD can hook up their tests to their front-end deployment pipeline.

However, after 2 years of NPM dependency-hell pains, we decided to migrate our CLI toolchain to Go for:

  • Zero deployment dependencies
  • Single file distribution (and backwards compatible with NPM)

Happy with how it is: this article covers the decision in much deeper detail

https://dev.to/uilicious/why-we-migrated-our-cli-from-nodejs-to-golang-1ol8

14 upvotes·28.7K views

Decision at Zulip about Webpack

Avatar of tabbott
Founder at Zulip ·

We use Webpack because it's the standard toolchain for managing frontend dependencies in 2019, and it's hard to make a nice frontend development user experience without it.

I don't like it -- its configuration system is a mess, requiring a ton of reading or expertise to do things that essentially every project wants to do by default. It has a lot of great features, which is why we use it. But as an example, its development server's hot reloading is really cool, yet it doesn't handle changes to the webpack configuration file itself (so adding a new file requires a restart).

My hope is that the sheer fact that everyone is using it will eventually lead to these problems being fixed or it being replaced by a similar system with a better design.

5 upvotes·6.3K views

Decision at Stack Overflow about .NET

Avatar of NickCraver
Architecture Lead at Stack Overflow ·

We use .NET Core for our web socket servers, mail relays, and scheduling applications. Soon, it will power all of Stack Overflow. The ability to run on any platform, to extend and plug in especially the ASP.NET bits, and to treat almost everything as a building block you can move around has been a huge win. We're headed towards an appliance model, and with .NET Core we can finally put everything in a box...on Linux. We can re-use more code, fit all our deployment scenarios both during the move and after, and also ditch a lot of performance workarounds we had to scale...they're in-box now.

And testing. The ability to fire up a web server and a request, and access both in a single method, is an order-of-magnitude improvement over ASP.NET 5. We're looking forward to tremendously improving our automated test coverage in places where it's finally reasonable in both time and effort for devs to do so. In short: we're getting a lot more for the same dev time spent in .NET Core.

4 upvotes·1 comment·26.5K views