broccoli code

An antithesis of spaghetti.

Nullability is a hard problem. This is especially true for JavaScript/TypeScript. Front-ends and web servers (which make up most JavaScript usage) tend to work with a lot of JSON. And although it's relatively easy to call JSON.parse(myJSONString), it's a whole different game when you want to safely traverse the resulting JavaScript object.

And since TypeScript offers type safety to those who traverse these objects properly, we have to be mindful to preserve its type guarantees and not circumvent the type system needlessly.

Let's start with a deeply nested nullable interface:

interface IUser {
  id: string;
  name: string;
  addresses?: Array<{
    street: string;
    suburb: string;
    postcode: string;
    country: string;
    mail?: Array<{
      subject: string;
      sentOn: string;
      body: string;
    }>;
  }>;
}

Since the address could be null:

const user: IUser = { id: "1", name: "foo" };
const firstAddress = user.addresses[0]; // throws a TypeError at runtime in JavaScript; a compile error in TypeScript.

Option 0: A naive approach to traversing

If you use && to “chain” your accessors, you can get around this problem a little bit:

const firstAddressStreet =
  user.addresses && user.addresses[0] && user.addresses[0].street;

We can quickly see how this approach makes things very hard to “grok” in your head. If you need to extend this beyond a couple of levels, you end up with:

const firstMailSentOn =
  user.addresses &&
  user.addresses[0] &&
  user.addresses[0].mail &&
  user.addresses[0].mail[0] &&
  user.addresses[0].mail[0].sentOn;

Option 1: Enter idx

I have found idx to be a safe method of accessing these nested nullable values. A naive implementation of this would be:

const idx = <Input, Output>(input: Input, select: (input: Input) => Output) => {
  try {
    return select(input);
  } catch (e) {
    return null;
  }
};

Having this, you will be able to select the previous property like this, in JavaScript:

const firstMailSentOn = idx(user, u => u.addresses[0].mail[0].sentOn);

This works because any operation which throws is rescued by our catch block which returns null.

But this won't work quite the same in TypeScript. TypeScript will complain that you are trying to select addresses, but the addresses field is possibly undefined. Therefore, when using idx with TypeScript, you tend to use the non-null assertion (!) a lot.

const firstMailSentOn1 = idx(user, u => u.addresses[0].mail[0].sentOn); // compile error: Object is possibly 'undefined'
const firstMailSentOn2 = idx(user, u => u.addresses![0]!.mail![0]!.sentOn); // silences type errors

As good of an idea as idx seems to be, it has a couple of downsides.

  1. A non-null assertion in TypeScript should be discouraged: a non-null assertion means “I know better, compiler. Stop complaining.” Once you set this precedent in a codebase, you're giving up some of the guarantees TypeScript gives you about your app's behaviour. It's better not to introduce a bad habit, even in a contained space like idx, to ensure the long-term maintainability of the code. See the broken window theory.

  2. idx can be abused: once you introduce idx, developers will use it as a shorthand for anything that needs a try-catch around it. In large codebases this is all too common, as policing code becomes harder.

const thisIsWrong = idx(obj, p =>
  myVeryUnsafeOperationWithSideEffects(p, otherVariable)
); // js-version with babel plugin will complain here though.

Option 2: Maybe: a monadic adventure

Without going into a lengthy discussion about what Functors, Applicatives and Monads are, let me make a proclamation that monads are great for the use case of traversing deeply nested null values because they let you “map over nullable values in small isolated contexts”. Let's dive into an example:

Maybe/Option types are prevalent in many functional programming languages and there are a number of JavaScript/TypeScript libraries that implement them. All of them have the same characteristics. For this example, we'll consider the excellent true-myth Maybe type.

import { Maybe } from "true-myth";

const firstMailSentOn = Maybe.fromNullable(user.addresses)
  .map(addresses => addresses[0])
  .map(firstAddress => firstAddress.mail)
  .map(allMail => allMail[0])
  .map(firstSentMail => firstSentMail.sentOn)
  .unwrapOr("never");

Maybe allows us to “box” a potentially nullable value, and map over it with a mapping function. The mapping function may return another nullable value. But the subsequent mapping function will only be hit if the previous mapping function resolves with a non-null value. In the previous example, if the .map(firstAddress => firstAddress.mail) returns null because firstAddress.mail is null, .map(allMail => allMail[0]) will not be invoked. It will instead short-circuit to .unwrapOr("never"), making firstMailSentOn === "never".
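To make that short-circuiting concrete, here is a minimal sketch of the idea. This is illustrative only: true-myth's real Maybe has distinct Just/Nothing variants and many more methods, but the behaviour under map is the same.

```typescript
// A minimal Maybe sketch -- not the true-myth implementation.
class Maybe<T> {
  private constructor(private readonly val: T | null | undefined) {}

  static fromNullable<T>(val: T | null | undefined): Maybe<T> {
    return new Maybe(val);
  }

  // Applies fn only when the boxed value is non-null; otherwise the
  // empty Maybe propagates, so later maps are skipped entirely.
  map<U>(fn: (val: T) => U | null | undefined): Maybe<U> {
    return this.val == null ? new Maybe<U>(null) : new Maybe<U>(fn(this.val));
  }

  unwrapOr(fallback: T): T {
    return this.val == null ? fallback : this.val;
  }
}

// With no addresses, the very first map is skipped and we fall
// through to the default:
const user: { addresses?: Array<{ street: string }> } = {};

const firstStreet = Maybe.fromNullable(user.addresses)
  .map(addresses => addresses[0])
  .map(first => first.street)
  .unwrapOr("never");
// firstStreet === "never"
```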

There are a few advantages of this approach, especially for TypeScript.

  1. We don't have to circumvent the type system in any way. We don't need non-null assertions, since the nullable part of the type gets thrown out at each map: each map is only invoked if the previous step returned a non-null value.
  2. If your types are correct, this will be extremely safe. (Doesn't throw)
  3. You can elegantly supply a default value ("never" in our case, via unwrapOr), thereby forcing the developer to consider the alternative scenario: what happens when a nullable value is encountered.

One might look at this and question the readability of this approach. It's subjective – but I'm sympathetic to anyone who says a selection that looks like a.b.c.d looks much simpler than Maybe.fromNullable(a).map(p => p.b)....

Option 3: Optional chaining

This is one of those things I'm most excited for in JavaScript/TypeScript. At the time of writing this article, this is still very much a TC39 proposal. Optional chaining tries to solve this problem by introducing a new syntax:

const firstMailSentOn = user.addresses?.[0]?.mail?.[0]?.sentOn;

If you're familiar with Ruby, this will look a lot like the safe navigation operator (foo&.bar).

Essentially it safely navigates the chain, stopping whenever it sees a null/undefined value so as not to throw and crash the program. Although I can appreciate the monadic functional method of traversing nested values, the signal-to-noise ratio of this new syntax is relatively higher. In that vein, I'm very excited about its arrival in TypeScript 3.7.
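Until the syntax lands, the same chain can be hand-rolled; conceptually, each ?. link is just a null/undefined check:

```typescript
// Roughly, a?.b means: a == null ? undefined : a.b
// A hand-written equivalent of the user.addresses?.[0]-style chain:
const user: { addresses?: Array<{ street: string }> } = {};

const firstAddress = user.addresses == null ? undefined : user.addresses[0];
const street = firstAddress == null ? undefined : firstAddress.street;
// street is undefined here; nothing along the way throws
```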

Can I use optional chaining today in TypeScript?

Short answer – no. It's landing in TypeScript 3.7, and there are no experimental... flags to support it as of yet.

However, if you must, you can use ts-optchain to do something similar. There's a compiler transformation you'll have to configure. And in the future, if you end up using the official optional-chaining syntax, you'll have to do a code migration as well.

As someone who jumped on the Node.js train early on and wrestled with callback hell, I quite liked promises. I still do, but more in an “it's the best we've got” way. So many times I've forgotten to catch a promise chain, and it silently failed. Or I'd have to hack my way through trying to cancel a promise chain.

I won't go into too much detail over all of the issues with promises. But I highly recommend Broken Promises which does an excellent job of summarizing everything.

Fluture

As an alternative to promises, I've been using the fluture-js library for some time now, and I highly recommend it. It introduces itself as a “Fantasy Land compliant (monadic) alternative to Promises”. If you're unfamiliar with monads or the Fantasy Land specification, don't worry about it.

Douglas Crockford once said, “Once someone understands monads, they lose the ability to explain them to anybody else”. Despite that, let me take a shot at explaining what a monad is, in JavaScript terms.

In the JavaScript world, a monad tends to be an object with a bunch of functions as properties. When you invoke one of these functions, it does something and returns something like the original object. You can then use that returned object for further computations. This might sound like a promise, but promises are not monads, for reasons we won't get into here.
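A toy example of that shape (not a lawful monad, just the chaining ergonomics; box is a made-up name):

```typescript
// A toy chainable "box": each map returns another box, so calls
// keep chaining, much like the monadic APIs described above.
interface Box<T> {
  value: T;
  map<U>(fn: (value: T) => U): Box<U>;
}

const box = <T>(value: T): Box<T> => ({
  value,
  map: fn => box(fn(value)),
});

const result = box(2)
  .map(n => n + 3)
  .map(n => `total: ${n}`)
  .value;
// result === "total: 5"
```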

Example

Let's consider a simple futures example (using fluture-js) where we:

  1. Read and parse a package.json file to get the name of a package
  2. Send that data to a fictional API that returns some metadata about it
  3. Parse that result to get the download count
  4. Send that count to an onSuccess function

import { encaseP, node, encase, chain, map, fork } from "fluture";
import fs from "fs";

const readFileF = filePath => node(done => fs.readFile(filePath, "utf8", done));

const fetchF = encaseP(url => fetch(url));

const jsonParseF = encase(jsonString => JSON.parse(jsonString));

const getPackageDownloads = (npmPackageName, onSuccess, onFailure) => {
  readFileF("package.json")
    .pipe(chain(jsonParseF)) // shorthand for chain(jsonString => jsonParseF(jsonString))
    .pipe(map(pkg => pkg.name))
    .pipe(
      chain(packageName => fetchF(`/api/v1/package-metadata/${packageName}`))
    )
    .pipe(map(jsonParseF))
    .pipe(map(response => response.data.metadata.downloadCount))
    .pipe(
      fork(error => onFailure(error), downloadCount => onSuccess(downloadCount))
    );
};

Explanation

I'd like to believe that you can elicit 80% of the value that fluture gives by knowing 20% of the constructs it provides. And this might well be that 20%. Let's go through each construct of the earlier example in detail and see what it does.

0. Common to all constructs...

...is the fact that everything returns a future. Therefore, we can compose these futures, refactor them out, etc.

1. encaseP, node, and encase create futures

Since JavaScript doesn't have futures, we need tools to convert existing JavaScript constructs, like promises and functions, to futures. These three constructs do exactly that.

  1. encaseP creates a future from a promise
  2. node creates a future from a node-style async function
  3. encase creates a future from a plain old javascript function

Since all three of these JavaScript constructs may fail (through a promise rejection, a callback(error), or an exception), these three utilities map that failure to a future rejection as well.
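The idea behind encase can be sketched without the library. This is a simplification (fluture's real futures are also lazy), and encaseSketch is a made-up name, but it shows how an exception becomes a value:

```typescript
// Illustrative sketch of the encase idea, not fluture's API:
// wrap a throwing function so failure becomes data.
type Result<T> = { ok: true; value: T } | { ok: false; error: unknown };

const encaseSketch =
  <A, B>(fn: (input: A) => B) =>
  (input: A): Result<B> => {
    try {
      return { ok: true, value: fn(input) };
    } catch (error) {
      return { ok: false, error };
    }
  };

const safeParse = encaseSketch((s: string) => JSON.parse(s));

const good = safeParse('{"name":"my-package"}'); // ok: true
const bad = safeParse("not json");               // ok: false, error captured
```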

2. pipe lets you chain futures

Think of it like the pipe() you get when you invert the compose() function. We talked about this at length in A practical guide to writing more functional JavaScript. It basically lets you “chain” futures.

3. map transforms values

When you pipe something, you always have to pipe a future-ed version of your result. When your computation doesn't produce a future (like selecting the name off the parsed package), you can do that transformation inside map.

When you invoke map(fn) on a future, map takes the value inside that future, applies fn to transform it, and returns the result wrapped in a future.

4. chain transforms values but expects a future back

chain does the same thing map does, but the fn in your chain(fn) must return a future.

5. fork to execute

Since futures are lazily evaluated, nothing happens until you tell the future to execute. fork(failureFn, successFn) takes a failure function and a success function and invokes the appropriate one on failure or success.

Why use futures over promises?

There are a lot of advantages to using futures. An aesthetically pleasing API is a big part of it. Since promises are the real competition, let me try to make a few concrete distinctions against promises.

  1. Lazy evaluation has a lot of practical advantages. You have a guarantee that your computation will not execute at the time of creating the future, whereas new Promise(...) executes at the time of creation.

  2. Testability comes through lazy evaluation as well. Instead of mocking all of your side effects when testing, you can assert that the futures wrapping the side effects were “composed”, without executing the side effects.

  3. Better control flow than promises, for sure. You can race(), parallel()-ize, and cancel one or more futures out of the box.

  4. Error handling is far superior. You will always end up handling an error. No more forgotten catch()es silently suppressing errors. And you get a really good debugging experience out of the box.
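The laziness point can be demonstrated in a few lines; the “future” here is just a plain function standing in for the real thing:

```typescript
// Illustrative only: a promise's executor runs eagerly, while a
// future-like thunk runs nothing until it is explicitly forked.
let promiseRan = false;
let futureRan = false;

const promise = new Promise<number>(resolve => {
  promiseRan = true; // runs immediately, on construction
  resolve(42);
});

// The "future" is just a function holding the computation.
const future = (onSuccess: (n: number) => void) => {
  futureRan = true; // runs only when forked
  onSuccess(42);
};

const ranBeforeFork = futureRan; // false: nothing has executed yet
future(n => n);                  // "forking" finally runs it
```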

In Practical guide to writing more functional Javascript, we walked through how to reason about our code in functional programming terms. In this guide, we will talk about a few utilities I like to use to reason about these concepts and help us navigate through the imperative constructs JavaScript natively provides.

Tread Lightly

I think that making imperative constructs in the language (if-else/try-catch) more declarative will improve the readability and testability of your code. It’s a strong opinion loosely held because I can sympathise with the hidden cost of it as well.

The right abstractions are hard because they force you and your team to come to a consensus. And agreeing is hard, especially when you’re trying to rewrite simple language constructs as functional abstractions. It’s doubly hard because which code is more readable, and at what level to unit test things, are very subjective as well.

Functional construct #1: conditionally

I argued for the need to have functional constructs in my earlier article. Let’s consider a basic language construct: if-else. What if we can express if-else as a functional construct?

The implementation goes like:

export const conditionally = (config) => (props) => {
  return config.if(props) ? config.then(props) : config.else(props);
};

In straightforward terms, conditionally asks: can I write if-else in a way that always returns a value, by evaluating a function in both the true and false branches? Or in other words, if I could express what if-else does as a pure function, what would it look like?

It takes a config, that has three functions: if(), then() and else(). And it constructs a function which can receive your props argument.

When if(props) evaluates to true, it fires then(props); otherwise, else(props). All three functions receive the same input and conventionally produce the same type of result.

If you use Typescript, we can enforce the input type and the result with generics. If the following looks complicated, or you don’t have experience with generics in Typescript, feel free to skip over the following example.

export const conditionally = <Props, Result>(options: {
  if: (props: Props) => any;
  then: (props: Props) => Result;
  else: (props: Props) => Result;
}) => (props: Props) => {
  return options.if(props) ? options.then(props) : options.else(props);
};

Let’s consider a normal if-else condition.

function getCarConfig(car) {
  let description;
  let newPrice;

  if (car.rating > 4) {
    description = "good car";
    newPrice = car.price + 1000 * car.rating;
  } else {
    description = "bad car";
    newPrice = car.price;
  }
  
  return {
    description,
    newPrice,
  }
}

The above example is an almost perfectly good way of writing this. But we can do better. Now let’s consider writing this with conditionally.

const hasGoodRating = rating => rating > 4;

const priceChange = conditionally({
  if: hasGoodRating,
  then: rating => 1000 * rating,
  else: () => 0,
});

const getDescription = conditionally({
  if: hasGoodRating,
  then: () => "good car",
  else: () => "bad car",
});

function getCarConfig(car) {
  return {
    newPrice: priceChange(car.rating) + car.price,
    description: getDescription(car.rating)
  }
}

This might seem a bit verbose. But let’s analyse it a bit…

The different concerns are now handled by two different functions. conditionally has gently forced you to separate your concerns. This, in turn, gives you the option to test all of these concerns in isolation and mock them as needed, adhering to most of the F.I.R.S.T principles of unit testing.

When someone else reads your code to understand what getCarConfig does, they don’t need to go to the implementation details of priceChange and getDescription, because you’ve named things properly. Your extractions now have a single responsibility and proper naming creates the least astonishment for a reader.

That, IMHO, is why I advocate embracing FP in JavaScript. It forces you to break the problem into small atomic parts called functions. These functions:

  1. Separate your concerns
  2. Improve testability
  3. Naturally adhere to the Single Responsibility Principle
  4. With a bit of practice in naming things, preserve the Principle of Least Astonishment

Functional construct #2: tryCatch

Exceptions are a powerful tool in a lot of languages. They provide a refuge from the unknown, unreasonable and unsafe boundaries of a system.

In Javascript, you can use try-catch:

function setUserLanguageCode(selectedLanguage) {
  const languageCode = getLanguageCode(selectedLanguage);
  
  let storedSuccessfully;
  
  try {
    window.localStorage.setItem("LANG_CODE", languageCode);
    storedSuccessfully = true;
  } catch (e) {
    storedSuccessfully = false;
  }
  
  return {
    storedSuccessfully
  }
}

But try-catch is a bit verbose. If you want to record state (like storedSuccessfully above), you have to declare a let, which signals a possible mutation of state. Also, semantically, try-catch signals a break in control flow, making the code harder to read.

Let’s try to create a functional utility to mitigate some of those issues.

export function tryCatch({
  tryer,
  catcher
}) {
  return (props) => {
    try {
      return tryer(props);
    } catch (e) {
      return catcher(props, e.message);
    }
  };
}

Here, we encapsulate the try-catch construct in a function. tryCatch() will receive a config object with two functions. It then returns a function which will accept a single props object.

  1. tryer(props) will be evaluated, and its result returned.
  2. If an exception occurs while running tryer(props), catcher(props, e.message) will be called instead.

Again, with Typescript, you can use generics to enforce the input types and the output types of this construct. If the generics here look a bit daunting, I’ve written a beginner’s intro to generics and why you should use them here.

export function tryCatch<Props, Result>({
  tryer,
  catcher
}: {
  tryer: (props: Props) => Result;
  catcher: (props: Props, message: string) => Result;
}) {
  return (props: Props) => {
    try {
      return tryer(props);
    } catch (e) {
      return catcher(props, e.message);
    }
  };
}

With that in mind, let’s try to refactor our earlier example.

const storeLanguageCode = tryCatch({
  tryer: (languageCode) => {
    window.localStorage.setItem("LANG_CODE", languageCode);
    return true;
  },
  catcher: (languageCode, errorMessage) => {
    logger.log(`${errorMessage} <-- happened while storing ${languageCode}`);
    return false;
  }
});

const setUserLanguageCode = pipe(
  getLanguageCode,
  languageCode => storeLanguageCode(languageCode), // or just storeLanguageCode
  storedSuccessfully => ({ storedSuccessfully })
);

// setUserLanguageCode("en-US") will work as before.

If you’re unfamiliar with the usage of pipe, check out my earlier article on a practical guide to writing functional JavaScript. TL;DR: it’s a reverse _compose()_.

Again we can see that our functional construct has forced us to break out the unsafe part of our code into a different function. Furthermore, we have ended up with 3 discrete functions that we can pipe() together to get our end result.

The benefits I explained earlier apply here as well. The most important is readability. Now when someone reads your setUserLanguageCode function, they don’t have to take on the cognitive burden of parsing the try-catch upfront, because it is encapsulated in an aptly named storeLanguageCode function.

Closing notes

I don’t advocate writing things in conditionally and tryCatch just for the sake of doing so. Sometimes, a simple ternary operation or a vanilla if-else keeps things perfectly readable. But, I personally try to follow a convention as much as I can. Conventions allow developers to make fewer decisions and conserve brain power.

And conditionally and tryCatch makes a lot of good decisions for me by default.

Small functions considered harmful lists the opposing view to this approach. I don’t fully agree with some of the things in that article, and some of it just doesn’t hold water in an FP paradigm. Nevertheless, I implore you to go and read it.

There are no absolutes in software engineering. No, not even DRY. As always, keep exploring and use your best judgement.

Sometime back, when the “Flow vs. TypeScript” debate was raging, I had to pick a side. And I picked TypeScript. Fortunately, that was one of the better decisions I’ve made. What ultimately convinced me was TypeScript’s support for call-time generics.

Today, let me try to walk you through what generics accomplish and how it helps us write safer, cleaner and more maintainable code.

Example #1: Asserting a simple type

Let’s say we need a function that takes any value and puts that into an object. A naive implementation of this in Typescript would look and run like:

const wrapInObj = (myValue: any) => {
  return {
    value: myValue,
  }
}

const wrappedValue = wrapInObj(12345);

wrappedValue.value.split(); // TypeError: wrappedValue.value.split is not a function 🤒

So much for type-safety.

It’s true that myValue can be of any type. But what we need to tell the compiler is that the output type of the function, although it cannot be foreseen by the developer writing the function, can be “inferred” from the type of the input. In other words, we can have a “generic definition” of what the output is.

Generic implementation of the above function would be something like this:

const wrapInObj = <T>(myValue: T) => {
  return {
    value: myValue,
  }
}

What we’re simply saying is that myValue can have a type of T. It can be “any type” but not any type. In other words, it has a type we care about.

If you try to write the earlier execution in Typescript, you won’t be able to run it, as the compiler gives a helpful warning:
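With the generic version, the compiler rejects the unsafe call at build time. The exact message varies by TypeScript version, but it looks roughly like this:

```typescript
const wrapInObj = <T>(myValue: T) => {
  return {
    value: myValue,
  };
};

const wrappedValue = wrapInObj(12345); // inferred as { value: number }

// wrappedValue.value.split("");
// ^ compile error, roughly:
//   Property 'split' does not exist on type 'number'.
```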

Example #2: Writing idx with Generics

idx is a “Library for accessing arbitrarily nested, possibly nullable properties on a JavaScript object”. It’s especially useful when you work with complex Javascript objects like REST API responses that may have nullable fields.

type User = {
  user?: {
    name: string,
    friends?: Array<User>,
  }
};

// to safely get friends of friends, we have to write:
const friendsOfFriends = 
      props.user &&
      props.user.friends &&
      props.user.friends[0] &&
      props.user.friends[0].friends;
      
// or, if we use idx:
const friendsOfFriends = idx(props, _ => _.user.friends[0].friends);

If you don’t mind me oversimplifying a bit: it basically tries the function given as the second parameter with props. If that fails, it catches and safely returns undefined, without throwing.

Again, a naive implementation of this would be:

export const idx = (
  props: any,
  selector: (props: any) => any
) => {
  try {
    return selector(props);
  } catch (e) {
    return undefined;
  }
};

const props = {
  user: {
    name: "ipso",
    friends: [{
      name: "facto",
      friends: []
    }]
  }
}

const friendsOfFriends = idx(props, _ => _.user.noBueno) // TypeScript doesn't complain

But, if we’re a bit clever with generics, we can get Typescript to help us with this.

export const idx = <T extends {}, U>(
  props: T,
  selector: (props: T) => U | undefined
) => {
  try {
    return selector(props);
  } catch (e) {
    return undefined;
  }
};

We’ve introduced two generic types here.

T is for the input type, and we “hint” that it’s an object by saying T extends {}. U is for the output type. With these, we can express that the selector function takes T and returns U or undefined.

Now if you attempt to write the same code as before with this definition of idx, you will get a compile error:
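Sketched out, with the error message paraphrased (exact wording varies by TypeScript version):

```typescript
const idx = <T extends {}, U>(
  props: T,
  selector: (props: T) => U | undefined
) => {
  try {
    return selector(props);
  } catch (e) {
    return undefined;
  }
};

const props = { user: { name: "ipso" } };

// const bad = idx(props, _ => _.user.noBueno);
// ^ compile error, roughly:
//   Property 'noBueno' does not exist on type '{ name: string; }'.

const good = idx(props, _ => _.user.name); // "ipso"
```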

Example #3: Using type inference and generics to get the return type of a function

Suppose that I have a function, and I need to supply the consumer with the type of output. If I call this type FooOutput, I’ll write something like:

const foo = (value: string) => {
  return {
    input: value,
    time: Date.now(),
    characters: value.split("")
  }
}

type FooOutput = {
  input: string;
  time: number;
  characters: Array<string>;
}

But by using generics and type inference, I can write a ReturnType generic type, that can “infer” the return type of a function:

type ReturnType<T extends (...args: any[]) => any> = 
    T extends (...args: any[]) => infer R ? R : any;

We’re playing with T extends (...args: any[]) => any here. This just means that T is a function type that takes any number of arguments of any type and produces a value. Then we use conditional type inference to infer its return type R, and return it.

Using this, I avoid the need to write my return type in the above example manually. Since foo is a function value and ReturnType needs a type, I have to get the type of foo by using typeof.
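Putting that together (ReturnType has shipped with TypeScript’s standard library since 2.8, so in practice you don’t need to define it yourself):

```typescript
const foo = (value: string) => ({
  input: value,
  time: Date.now(),
  characters: value.split(""),
});

// typeof foo gives us the function's type; ReturnType extracts
// what it returns.
type FooOutput = ReturnType<typeof foo>;

const output: FooOutput = foo("hi");
// output.input: string, output.time: number, output.characters: string[]
```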

Helpful utilities in my toolbox 🛠

I use a bunch of these utilities in everyday programming. Most of the utility generics are defined in TypeScript’s lib.es5.d.ts. Some of my most-used ones include:

/**
 * Make all properties in T optional
 */
type Partial<T> = {
    [P in keyof T]?: T[P];
};

/**
 * Make all properties in T required
 */
type Required<T> = {
    [P in keyof T]-?: T[P];
};

/**
 * Make all properties in T readonly
 */
type Readonly<T> = {
    readonly [P in keyof T]: T[P];
};

/**
 * From T, pick a set of properties whose keys are in the union K
 */
type Pick<T, K extends keyof T> = {
    [P in K]: T[P];
};

/**
 * Construct a type with a set of properties K of type T
 */
type Record<K extends keyof any, T> = {
    [P in K]: T;
};

/**
 * Exclude from T those types that are assignable to U
 */
type Exclude<T, U> = T extends U ? never : T;

/**
 * Extract from T those types that are assignable to U
 */
type Extract<T, U> = T extends U ? T : never;

/**
 * Exclude null and undefined from T
 */
type NonNullable<T> = T extends null | undefined ? never : T;
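A few of these in action; the Car type here is made up for illustration:

```typescript
interface Car {
  make: string;
  model: string;
  rating?: number;
}

// Partial: every field optional -- handy for patch/update payloads
const patch: Partial<Car> = { rating: 5 };

// Pick: keep only the named keys
const carName: Pick<Car, "make" | "model"> = { make: "Lotus", model: "Elise" };

// Record: a dictionary keyed by strings
const byId: Record<string, Car> = {
  "1": { make: "Lotus", model: "Elise", rating: 5 },
};
```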

Hopefully, this helps you grasp Typescript generics a bit more. If you have questions, don’t hesitate to leave a question down below.

Render props are an increasingly popular method of sharing code between React components. They are called render props because they allow “sharing code” via a render prop. But for this exercise, we’ll do the same while sharing code with React’s own children prop, for aesthetics.

Because of that, the title could more aptly be “Code-sharing in React using functional children props”.

The problem

Since React embraces FP-style composition over inheritance, in the early days composition was mostly done statically, using [compose](https://github.com/acdlite/recompose/blob/master/docs/API.md#compose). And libraries like recompose made it really easy and straightforward. If you wanted to write components that were stateless, pure functions, you would write something like:

// https://github.com/acdlite/recompose/blob/master/docs/API.md

const enhance = compose(
  withState('value', 'updateValue', ''),
  withHandlers({
    onChange: props => event => {
      props.updateValue(event.target.value)
    },
    onSubmit: props => event => {
      event.preventDefault()
      submitForm(props.value)
    }
  })
)

const Form = enhance(({ value, onChange, onSubmit }) =>
  <form onSubmit={onSubmit}>
    <label>Value
      <input type="text" value={value} onChange={onChange} />
    </label>
  </form>
)

This was mostly better than sprinkling this.setState everywhere, as it made it easy for components to be “dumb and pure” and for logic to be abstracted into separate functions that could be tested independently. But it had its own problems.

1. Who put what in props?

Since so many things are injected via “props”, it becomes harder to reason about which composition injected what as the component grows larger.

2. Re-location of logic

Co-located code is easier to read. But since some of my logic is re-located to a place far away from its actual (and oftentimes only) usage, it becomes harder to read things.

3. Lack of type inference

This was my personal favorite gripe. As a heavy user of TypeScript, I ended up spending a lot of time trying to find a way to have robust types without writing a lot of my own, and had little to no return from it.

A solution

Instead of providing re-usable logic via props, what if we could provide it via a children function?

Let’s say I want the current date, updated every second to be shared across my React app. I’d write a LiveDate component like this:

class LiveDate extends React.Component {
  // without an initial state, the first render would read
  // this.state.liveDate off null and throw
  state = { liveDate: null };

  componentDidMount() {
    // update state every second with the current time
    this.interval = setInterval(() => {
      this.setState({
        liveDate: new Date().toISOString(),
      });
    }, 1000);
  }

  componentWillUnmount() {
    // stop ticking when the component goes away
    clearInterval(this.interval);
  }

  render() {
    return this.props.children({
      date: this.state.liveDate || "loading..."
    });
  }
}

Normally, children would be a JSX expression. But here we expect children to be a function, and that function receives the { date: “…” } object.

const LiveDateDisplay = () => (
  <div>
    <p>Time is:</p>
    <p>
      <LiveDate>
        {
          (liveDate) => liveDate.date
        }
       </LiveDate>
     </p>
   </div>
);

Admittedly, this is a very simple example. But if we share code this way, it solves all 3 of the problems above.

  1. Re-usable logic is provided to the consumer as a React component, not polluting the props.
  2. We know who provides liveDate as it’s very much co-located.
  3. Type inference needs next to no effort because type systems can infer the type of liveDate based on the definition of LiveDate component.
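To see why the inference works, here is a sketch of the children contract in TypeScript, with the JSX stripped away. All names here are hypothetical stand-ins, not React’s actual types:

```typescript
// The payload the component hands to its children function.
interface LiveDateChildProps {
  date: string;
}

// The component's props: children is a function, not JSX.
interface LiveDateProps {
  children: (props: LiveDateChildProps) => string;
}

// Stand-in for the component's render(): it calls children with a
// typed payload, so the consumer's callback argument is inferred.
const renderLiveDate = (props: LiveDateProps) =>
  props.children({ date: new Date(0).toISOString() });

const output = renderLiveDate({
  // liveDate is inferred as LiveDateChildProps; no annotation needed
  children: liveDate => `Time is: ${liveDate.date}`,
});
```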

In the wild

A lot of the libraries that you might be using right now support render props out of the box. The Route component of the popular react-router lets you access properties like match:

// https://github.com/ReactTraining/react-router/blob/master/packages/react-router/docs/api/Route.md#children-func

const ListItemLink = ({ to, ...rest }) => (
  <Route path={to}>
    {({match}) => (
      <li className={match ? "active" : ""}>
        <Link to={to} {...rest} />
      </li>
    )}
  </Route>
);

react-powerplug is something that I use fairly heavily to do state management in my apps.

// https://github.com/renatorib/react-powerplug

import { State, Toggle } from 'react-powerplug'
import { Pagination, Tabs, Checkbox } from './MyDumbComponents'

<State initial={{ offset: 0, limit: 10, totalCount: 200 }}>
  {({ state, setState }) => (
    <Pagination {...state} onChange={(offset) => setState({ offset })} />
  )}
</State>

<Toggle initial={true}>
  {({ on, toggle }) => (
    <Checkbox checked={on} onChange={toggle} />
  )}
</Toggle>

Declarative animation? react-morph has got you covered.

// https://github.com/brunnolou/react-morph

<ReactMorph>
  {({ from, to, fadeIn, go }) => (
    <div>
      <a onClick={() => go(1)}>
        <strong {...from("title")}>ReactMorph 🐛</strong>
        <br />
        <p {...from("description")}>Morphing transitions was never so easy!</p>
      </a>

      <div>
        <h1 {...to("title")}>ReactMorph 🦋</h1>
        <br />
        <h2 {...to("description")}>Morphing transitions was never so easy!</h2>

        <a onClick={() => go(0)} {...fadeIn()}>
          Back
        </a>
      </div>
    </div>
  )}
</ReactMorph>

Render props do come with their own problems. But for 95% of use cases, they work very well compared to traditional composition methods, and React Hooks solve most of the remaining issues. In the meantime, for a list of other libraries that have adopted the convention, check out this list.

Functional programming is great. With the introduction of React, more and more JavaScript front-end code is being written with FP principles in mind. But how do we start using the FP mindset in the everyday code we write? I’ll attempt to use an everyday code block and refactor it step by step.

Our problem: a user who comes to our /login page will optionally have a redirect_to query parameter, like /login?redirect_to=%2Fmy-page. Note that %2Fmy-page is /my-page percent-encoded so it can travel as part of the URL. We need to extract this query string value, decode it, and store it in local storage, so that once the login is done, the user can be redirected to /my-page.
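To make the encoding concrete, decodeURIComponent is the built-in that reverses the percent-encoding:

```javascript
// decodeURIComponent reverses the percent-encoding applied to the path
const encoded = '%2Fmy-page'
const decoded = decodeURIComponent(encoded)

console.log(decoded) // '/my-page'
```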

Step #0: The imperative approach

If we had to express the solution in the simplest form of issuing a list of commands, how would we write it? We will need to

  1. Parse the query string.
  2. Get the redirect_to value.
  3. Decode that value.
  4. Store the decoded value in localStorage.

And we have to put try catch blocks around “unsafe” functions as well. With all of that, our code block will look like:

function persistRedirectToParam() {
  let parsedQueryParam

  try {
    parsedQueryParam = qs.parse(window.location.search) // https://www.npmjs.com/package/qs
  } catch (e) {
    console.log(e)
    return null
  }

  const redirectToParam = parsedQueryParam.redirect_to

  if (redirectToParam) {
    const decodedPath = decodeURIComponent(redirectToParam)

    try {
      localStorage.setItem('REDIRECT_TO', decodedPath)
    } catch (e) {
      console.log(e)
      return null
    }

    return decodedPath
  }

  return null
}

Step #1: Writing every step as a function

For a moment, let’s forget the try catch blocks and try expressing everything as a function here.

// let's declare all of the functions we need to have

const parseQueryParams = query => qs.parse(query)

const getRedirectToParam = parsedQuery => parsedQuery.redirect_to

const decodeString = string => decodeURIComponent(string)

const storeRedirectToQuery = redirectTo =>
  localStorage.setItem('REDIRECT_TO', redirectTo)

function persistRedirectToParam() {
  // and let's call them

  const parsed = parseQueryParams(window.location.search)

  const redirectTo = getRedirectToParam(parsed)

  const decoded = decodeString(redirectTo)

  storeRedirectToQuery(decoded)

  return decoded
}

When we start expressing all of our “outcomes” as results of functions, we see what we can refactor out of our main function body. When that happens, our function becomes much easier to grok, and much easier to test.

Earlier, we would have tested the main function as a whole. But now, we have 4 smaller functions, and some of them are just proxying other functions, so the footprint that needs to be tested is much smaller.

Let’s identify these proxying functions, and remove the proxy, so we have a little bit less code.

const getRedirectToParam = parsedQuery => parsedQuery.redirect_to

const storeRedirectToQuery = redirectTo =>
  localStorage.setItem('REDIRECT_TO', redirectTo)

function persistRedirectToParam() {
  const parsed = qs.parse(window.location.search)

  const redirectTo = getRedirectToParam(parsed)

  const decoded = decodeURIComponent(redirectTo)

  storeRedirectToQuery(decoded)

  return decoded
}

Step #2: An attempt at composing functions

Alright. Now, it seems like the persistRedirectToParam function is a “composition” of 4 other functions. Let’s see whether we can write this function as a composition, thereby eliminating the interim results we store as consts.

const getRedirectToParam = parsedQuery => parsedQuery.redirect_to

// we have to re-write this a bit to return a result.
const storeRedirectToQuery = redirectTo => {
  localStorage.setItem('REDIRECT_TO', redirectTo)
  return redirectTo
}

function persistRedirectToParam() {
  const decoded = storeRedirectToQuery(
    decodeURIComponent(getRedirectToParam(qs.parse(window.location.search)))
  )

  return decoded
}

This is good. But I feel for the person who reads this nested function call. If there was a way to untangle this mess, that’d be awesome.

Step #3: A more readable composition

If you’ve done some redux or recompose, you’d have come across compose. Compose is a utility function which accepts multiple functions, and returns one function that calls the underlying functions one by one. There are other excellent sources to learn about composition, so I won’t go into detail about that here.

With compose, our code will look like:

const compose = require('lodash/fp/compose')
const qs = require('qs')

const getRedirectToParam = parsedQuery => parsedQuery.redirect_to

const storeRedirectToQuery = redirectTo => {
  localStorage.setItem('REDIRECT_TO', redirectTo)
  return redirectTo
}

function persistRedirectToParam() {
  const op = compose(
    storeRedirectToQuery,
    decodeURIComponent,
    getRedirectToParam,
    qs.parse
  )

  return op(window.location.search)
}

One thing about compose is that it reduces the functions right-to-left: the first function invoked in the compose chain is the last one in the argument list (here, qs.parse).

This is not a problem if you’re a mathematician familiar with the concept, so you naturally read this right-to-left. But the rest of us, used to imperative code, would rather read it left-to-right.

Step #4: Piping and flattening

Luckily, there’s pipe. pipe does the same thing that compose does, but in reverse: the first function in the chain is the first one to process the input.

Also, it seems as if our persistRedirectToParam function has become a wrapper for another function that we call op. In other words, all it does is execute op. We can get rid of the wrapper and “flatten” our function.

const pipe = require('lodash/fp/pipe')
const qs = require('qs')

const getRedirectToParam = parsedQuery => parsedQuery.redirect_to

const storeRedirectToQuery = redirectTo => {
  localStorage.setItem('REDIRECT_TO', redirectTo)
  return redirectTo
}

const persistRedirectToParam = pipe(
  qs.parse,
  getRedirectToParam,
  decodeURIComponent,
  storeRedirectToQuery
)

// to invoke, persistRedirectToParam(window.location.search);

Almost there. Remember that we conveniently left our try-catch blocks behind to get to the current state? Well, we need some way to reintroduce them: both qs.parse and storeRedirectToQuery are unsafe. One option is to wrap each in a function with a try-catch block. The other, functional way is to express try-catch itself as a function.

Step #5: Exception handling as a function

There are some utilities which do this, but let’s try writing something ourselves.

function tryCatch(opts) {
  return args => {
    try {
      return opts.tryer(args)
    } catch (e) {
      return opts.catcher(args, e)
    }
  }
}

Our function here expects an opts object containing tryer and catcher functions. It returns a function which, when invoked with arguments, calls the tryer with those arguments and, upon failure, calls the catcher. Now, when we have unsafe operations, we can put them in the tryer and, if they fail, rescue and return a safe result from the catcher (and even log the error).

Step #6: Putting everything together

So, with that in mind, our final code looks like:

const pipe = require('lodash/fp/pipe')
const qs = require('qs')

const getRedirectToParam = parsedQuery => parsedQuery.redirect_to

const storeRedirectToQuery = redirectTo => {
  localStorage.setItem('REDIRECT_TO', redirectTo)
  return redirectTo
}

const persistRedirectToParam = pipe(
  tryCatch({
    tryer: qs.parse,
    catcher: () => {
      return {
        redirect_to: null, // we should always give back a consistent result to the subsequent function
      }
    },
  }),
  getRedirectToParam,
  decodeURIComponent,
  tryCatch({
    tryer: storeRedirectToQuery,
    catcher: () => null, // if localstorage fails, we get null back
  })
)

// to invoke, persistRedirectToParam(window.location.search);

This is more or less what we want. But to improve readability and testability further, we can name the “safe” functions as well.

const pipe = require('lodash/fp/pipe')
const qs = require('qs')

const getRedirectToParam = parsedQuery => parsedQuery.redirect_to

const storeRedirectToQuery = redirectTo => {
  localStorage.setItem('REDIRECT_TO', redirectTo)
  return redirectTo
}

const safeParse = tryCatch({
  tryer: qs.parse,
  catcher: () => {
    return {
      redirect_to: null, // we should always give back a consistent result to the subsequent function
    }
  },
})

const safeStore = tryCatch({
  tryer: storeRedirectToQuery,
  catcher: () => null, // if localstorage fails, we get null back
})

const persistRedirectToParam = pipe(
  safeParse,
  getRedirectToParam,
  decodeURIComponent,
  safeStore
)

// to invoke, persistRedirectToParam(window.location.search);

Now, what we’ve got is an implementation of a much larger function, consisting of 4 individual functions that are highly cohesive, loosely coupled, can be tested independently, can be re-used independently, account for exception scenarios, and are highly declarative. (And IMO, they’re a tad bit nicer to read.)

There’s some FP syntactic sugar that makes this even nicer, but that’s for another day.