How To Archives - Nightingale: The Journal of the Data Visualization Society

Building Tableau Dashboards for the PowerPoint Download
Thu, 26 Mar 2026 | https://nightingaledvs.com/building-tableau-dashboards-for-the-powerpoint-download/

Working in reporting and analytics for the last six years has made me realize an uncomfortable truth about Tableau: Your beautiful interactive dashboard will often become a static PowerPoint slide.

If you work in sales ops, finance, or any executive-facing analytics team, you already know this. Your vice president won’t open Tableau Server at 9 a.m. before the board meeting. They’ll download your dashboard as an image or a PowerPoint file, paste it into slide 17, and present it to the C-suite.

Once I accepted this reality, I started treating it as a design problem. Here are five non-negotiable lessons I learned on my Tableau journey.

The first Excel dashboard, created in 1990 using the first version of Excel for Windows. Source: Microsoft

1. Design for PowerPoint From Day One

Device preview matters far more when your dashboard will live in a PowerPoint deck.

In the early stages of redesigning an executive-level sales report, I built my dashboard in Tableau’s default “Desktop Browser” view. When I downloaded it as PowerPoint, it crushed into a single slide with illegible text — a formatting disaster right before a leadership presentation.

The fix here is using Tableau’s built-in PowerPoint layout (16:9 aspect ratio) from day one.

Source: Rituparna Das

This ensures your dashboard fits perfectly into standard Google Slides or PowerPoint without awkward cropping or white space. Don’t design for Tableau’s default dimensions — design for where your dashboard will actually be consumed.

Pro tip: Always test your export before the final version. Click “Dashboard > Export as PowerPoint” to preview exactly what stakeholders will see.

2. Accept That 80% of Functionality Disappears

This is the hardest lesson: You must build assuming zero interactivity.

What dies in PowerPoint:

  • Filters (static view only)
  • Parameters (whatever was selected during download)
  • Hover tooltips (invisible)
  • Drill-downs (gone)
  • Dashboard actions (non-functional)

This changes your design strategy. Now you have to build a separate static version for each filter setting your users will want to view. For example, my executives were interested in seeing pipeline performance across sales regions, sales clusters, business units, and product lines. What would have been one dashboard filter became four separate dashboards I had to create:

  • “Pipeline_Review_by_Sales_Region”
  • “Pipeline_Review_by_Sales_Cluster”
  • “Pipeline_Review_by_Business_Unit”
  • “Pipeline_Review_by_Product_Line”

Yes, it’s more work. Yes, it feels redundant. But it’s the only way to ensure your stakeholders see what they need without interactivity.
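If you script any part of your publishing workflow, the one-dashboard-per-filter pattern is at least easy to keep consistent. Here is a minimal sketch in Python; the helper name and underscore convention are my own illustration, not a Tableau API:

```python
def static_dashboard_names(report: str, dimensions: list[str]) -> list[str]:
    """Derive one static dashboard name per filter dimension."""
    return [f"{report}_by_{dim.replace(' ', '_')}" for dim in dimensions]

names = static_dashboard_names(
    "Pipeline_Review",
    ["Sales Region", "Sales Cluster", "Business Unit", "Product Line"],
)
for name in names:
    print(name)
# Pipeline_Review_by_Sales_Region
# Pipeline_Review_by_Sales_Cluster
# Pipeline_Review_by_Business_Unit
# Pipeline_Review_by_Product_Line
```

Generating the names from one list keeps the set of static dashboards in sync with the set of filter values your audience actually needs.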

Every critical insight must be visible on page load. If it requires a click to reveal, assume it will never be seen.

3. Use Containers for Layout Control

When your dashboard contains multiple visualizations, containers keep everything locked in place during the PowerPoint export. Without them, floating objects shift unpredictably — your perfectly aligned KPI cards end up overlapping your bar chart in the downloaded version.

PowerPoint downloads don’t tolerate white space. A minimalist Tableau dashboard might look elegant on screen, but it looks unfinished and unprofessional in a deck. Executives expect dense, information-rich slides.

Why containers solve both problems:

  • They lock your layout in place (no shifting elements)
  • They help you maximize space efficiently (no awkward gaps)
  • They give you precise control over how information flows

Source: Rituparna Das

This dashboard exports with excessive white space, making it look unprofessional in decks.

Best practice workflow:

  1. Create a low-fidelity mockup of your dashboard layout
  2. Build the container structure first (horizontal and vertical containers)
  3. Drop visualizations into containers last

Pro tip: Watch this Tableau container best practices video before building your next dashboard — it’ll save you hours of reformatting frustration.

4. Establish Governance Standards for Version Control and Collaboration

If you’re working collaboratively or managing multiple dashboard versions, implement a simple visual system:

Source: Rituparna Das

Use the color coding available for dashboards:

  • 🟢 Green: Production-ready, safe to download
  • 🟡 Yellow: Work in progress, do not present
  • 🔴 Red: Draft/testing only

Keep consistent and clear worksheet naming conventions. This will save your sanity.

❌ DON’T: “Bookings (1)”, “Bookings (1)(1)”, “Sheet 3”
✅ DO: “Q4_Bookings_Final”, “Pipeline_Review_v3”, “Pipeline Coverage_BarChart”
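A naming convention like this can even be checked mechanically. The sketch below is a hypothetical validator (the patterns are illustrative, not an official standard); it flags pasted-copy artifacts like “(1)” and default “Sheet N” names:

```python
import re

# Convention (assumed): descriptive words separated by underscores or spaces.
GOOD_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9]*(?:[_ ][A-Za-z0-9]+)*$")
# Red flags: "(1)"-style copy suffixes and Tableau's default "Sheet N" names.
BAD_SIGNS = re.compile(r"\(\d+\)|^Sheet \d+$")

def is_clean_name(name: str) -> bool:
    """Return True if a worksheet name follows the (assumed) convention."""
    return bool(GOOD_NAME.match(name)) and not BAD_SIGNS.search(name)

print(is_clean_name("Q4_Bookings_Final"))  # True
print(is_clean_name("Bookings (1)(1)"))    # False
print(is_clean_name("Sheet 3"))            # False
```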

5. Add Company Logos

Align as closely as possible to your organization’s standard slide deck template.

Why this matters: Your dashboard might be internal today, but it’ll be in a client presentation tomorrow. When your VP forwards it externally without asking you first (and they will), professional branding matters.

Where to place logos:

  • Top-left or top-right corner (consistent with company templates)
  • Footer with date/data source
  • Consider adding a “confidential” watermark for internal metrics

The Bottom Line

The moment you accept that your Tableau dashboard will become a PowerPoint slide, you start designing better dashboards.

Stop optimizing for interactivity. Start optimizing for screenshots.

Use the 16:9 layout. Build static versions of filtered views. Lock everything in containers. Name your worksheets like a professional. Add your company logo.

Your stakeholders don’t care about your elegant parameter actions if they can’t paste your dashboard into their Monday morning deck.

Sometimes being a great analyst means accepting that your masterpiece will be Ctrl+C’d, Ctrl+V’d into slide 23 — and designing for that reality from the start.

Categories: How To

The Tiles That Made Me: Mapping Friendship through the Lens of AI
Thu, 19 Mar 2026 | https://nightingaledvs.com/the-tiles-that-made-me/

According to the Oxford Dictionary, friendship is a “voluntary, personal relationship characterized by mutual affection, trust, and support.” To me, friendship also involves authenticity and a trustworthy partnership built on fun, kindness, and understanding.

It’s the size of the smile on your face when you see someone. It’s the decision to stay in touch with a niece long after family events end. It’s the fragile silence between you and a friend who couldn’t support a recent life choice.

As a data designer, I’ve always been obsessed with how we categorise the intangible. Recently, I set out to map the people who have shaped me. I didn’t want a balance sheet, but I did want to see the patterns. A relationship always evolves; this would only represent a snapshot in time.

The Taxonomy of Connection

I began by listing every person I care about. First from memory, then verified by my friends list on Facebook. But as I opened my spreadsheet, the questions started to flood in. Can family members count as friends? For example, my nieces and I have been chatting nonstop for years now. We grew fond of each other through the circumstance of birth, but we stayed in touch by choice. Does that make them friends? And what about a friend who isn’t supportive of my life choices? We were very close seven or eight months ago, but we are not now. Are we still friends? If I exclude her from this, does that mean I have given up on our friendship? Also, I use the term “friend” very loosely. I am naturally at ease with strangers. Is my new neighbour — with whom I have shared a few cups of tea — my friend?

To make sense of the friend list, I distilled friendship into three core metrics, each scored on a scale of one to three, with three being the highest possible rank:

  • Reliability: Loyalty, faithfulness, and the feeling of being safe.
  • Empathy: Supportiveness, kindness, and open communication.
  • Joy: Playfulness, liveliness, and shared common ground (though one might question whether friendship is required for common ground; for the sake of this visualisation, I decided it was).

I also added two judgment values: Duration (how long we have been friends), and Contact (how recently we spoke). To keep the data honest, I limited the scope to friends I had contact with in the last 24 months. I chose 24 months as a mark because it’s the period since my daughter was born. Spoiler alert: In a time when I often felt lonely as a new mother, the data showed me I was actually deeply loved.
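To make the scoring concrete, here is how such a record and the 24-month filter might look in code. The field names and sample values are my own illustration, not the author’s actual spreadsheet:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Friend:
    # Field names are illustrative, not the author's actual columns.
    name: str
    reliability: int    # 1-3: loyalty, faithfulness, feeling safe
    empathy: int        # 1-3: supportiveness, kindness, open communication
    joy: int            # 1-3: playfulness, liveliness, common ground
    years_known: float  # the "Duration" judgment value
    last_contact: date  # the "Contact" judgment value

def within_scope(friend: Friend, today: date, months: int = 24) -> bool:
    """Keep only friends contacted within the last `months` months."""
    elapsed = (today.year - friend.last_contact.year) * 12 \
            + (today.month - friend.last_contact.month)
    return elapsed <= months

friends = [
    Friend("A", 3, 3, 2, 15.0, date(2025, 12, 1)),
    Friend("B", 2, 1, 3, 0.5, date(2022, 6, 1)),
]
kept = [f.name for f in friends if within_scope(f, date(2026, 3, 1))]
print(kept)  # ['A']
```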

From Sketching to Scripting

In my notebook, the design evolved rather quickly into a series of “tiles.” I remember having the visual in my head for a while, and I felt as if I were a vessel letting it out onto the paper. I wanted something that would represent the scale’s levels easily. Level one was a simple base; level three added complex detail. 

Source: Or Misgav

Initially, I used background colors to denote duration, but the palette was too loud. It made the story about “how good I am at making friends” rather than “how these friendships built me.”

Source: Or Misgav

Then came the pivot. Usually, I build these visualizations by clicking the mouse. A thorough process of copying, pasting, and double-checking layers in Illustrator and Figma would easily take three hours. But, inspired by the “vision to execution with a click” movement, I turned to Claude and Gemini.

I asked Gemini to help me write the prompt for Claude. It generated a Python script that processed my Excel file and generated stacked layers as PNG files. Claude taught me how to install Python on my Mac. (Honestly, I felt like I was back in the 90s, typing into a terminal to launch a game.) Then, “Boom. Your tiles are ready.” With a single click, the assets were generated. A few back-and-forths with Claude, and the grid was aligned. The work was done.
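I can’t reproduce the generated script itself, but the core of any such tile generator is mapping each score to a layer asset and stacking the layers in order. The stand-in below shows that selection logic only; the file names and metric keys are assumptions, and a real script would then composite the PNGs (for example with Pillow’s `Image.alpha_composite`):

```python
# A stand-in for the generated script: it selects layers, it doesn't render.
METRICS = ("reliability", "empathy", "joy")

def tile_layers(scores: dict[str, int]) -> list[str]:
    """Map 1-3 scores to a bottom-to-top stack of layer asset names."""
    layers = ["base.png"]  # every tile starts from the same base layer
    for metric in METRICS:
        level = scores[metric]
        if not 1 <= level <= 3:
            raise ValueError(f"{metric} must be 1-3, got {level}")
        layers.append(f"{metric}_level{level}.png")
    return layers

print(tile_layers({"reliability": 3, "empathy": 1, "joy": 2}))
# ['base.png', 'reliability_level3.png', 'empathy_level1.png', 'joy_level2.png']
```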

Source: Or Misgav

The Cost of Efficiency

As I looked at the finished folder, a strange feeling washed over me: I didn’t recognize the data. By automating the execution, I had accidentally bypassed the data familiarization stage — that meditative hour where you handle each data point with care and remember the person behind it. The tiles were beautiful, but they felt distant.

It raised a fundamental question for our field:
If the AI builds the layers, are we co-creators? Or are we just curators of our own memories?

End Result. Source: Or Misgav
How to read. Source: Or Misgav

The Tokens of Gratitude

Despite the digital distance, the final grid is a testament to my life. These tiles are me. They represent the people who stayed through puberty, the ones who signed my wedding book, and the new friendship that began at my son’s preschool pickup and grew close.

This project is more than a visualization; it’s a token of gratitude. It captures a snapshot of my soul as it exists in 2026. Shaped by humans, rendered by machines, and held together by the voluntary, personal relationships that make life worth mapping.

Categories: Data Art

LA on the Move: Data Vandals Bring Wildlife and Humans Together at Union Station
Mon, 27 Oct 2025 | https://nightingaledvs.com/la-on-the-move/

The relationship between nature and the city is often framed as a tension—wilderness versus concrete, animals versus humans. But what if we looked at Los Angeles differently? What if we saw the city as a shared habitat where humans and wildlife navigate the same streets, highways, and neighborhoods together?

“LA on the Move”, our exhibition organized by Metro Art at Union Station in Los Angeles, California, opened in October and will remain on view through next year. Through larger-than-life graphics, a massive 3D map, playful character designs, and even animal sounds, we’ve created an immersive experience that asks Angelenos to see themselves reflected in the lives of coyotes, mountain lions, monarch butterflies, red-tailed hawks, and California kingsnakes.

From City Animals to Union Station

Details from the Data Vandals workshop

The seeds of “LA on the Move” were planted at ArtCenter College of Design, where we first encountered the City Animals class taught by Santiago Lombeyda and Ivan Cruz. “It was a topic I hadn’t really thought about before,” Jen recalls. “The interaction of humans and animals in LA County—it was super intriguing. The more we got to know the projects and the students, the more excited we became.” Then there was a chance to have an exhibition that pulled together a lot of these concepts and also showcased the student work, created in association with Metro Art at Union Station. “From there it just started rolling.”

The final projects from the City Animals class focused on speculative projects that explored how Angelenos could redesign their homes, backyards, and neighborhoods to better integrate with the natural world. Jason explains, “The projects that the students did were really about how people in LA could think about the intersection of the built environment—their homes, their yards, their backyards—with the natural world”. From there, we led two intensive workshop sessions with the students, working side by side to visualize ecological data in bold, accessible ways that were displayed in the ArtCenter student center for the following month.

From there, we were connected with Arroyos & Foothills Conservancy, a non-profit organization focused on preserving and restoring natural open spaces and wildlife habitats. They became an essential partner, sharing datasets on animal sightings, migration patterns, and habitat corridors across LA County as well as expert advice and access to Southern California’s environmental researchers.

The research process: Data meets daily life

“I think the first thing that we did, and what we always do, is begin with research,” Jen explains. “But in time, we leaned on the expertise of our friends at Arroyos & Foothills Conservancy—they were incredibly helpful. The other part, I think, that’s very important, is collecting anecdotal information when you’re talking to people who live in Los Angeles about their experiences”.

For us, stepping away from the data is essential. “It’s important to step away from the facts and the figures, and start talking to people because the experience that Californians have with wildlife is completely different than a New Yorker’s,” Jen says. “You can’t just go about your business like a city dweller and ignore nature in California. It’s integrated into your day-to-day experience”.

Los Angeles, we discovered, is extraordinary in its biodiversity. Jason notes, “Los Angeles has such a unique environment. And what we found is that it’s actually one of the three areas in the world that is considered a biodiversity hotspot”. This became a cornerstone of the exhibition—LA isn’t just a city with some nature on the edges; it’s where wildness lives alongside urbanity in remarkable, sometimes precarious, ways.

Five animals, five stories

We chose to focus the exhibition on five species: coyotes, pumas (mountain lions), red-tailed hawks, California kingsnakes, and monarch butterflies. Each animal became a character in the larger narrative of LA residents navigating neighborhoods, dating scenes, commutes, and survival just like the humans around them.

Photo courtesy of Metro Art

“One of the first things that you drew was the coyote that says: ‘I love LA.’ That’s one of the featured images in the show,” Jason recalls. For Jen, this illustration became a statement of intent: “A human says, I love LA—and we all know this phrase—but animals live there too. What’s their role in this? So, we wanted to make sure that the animals and humans get equal time in this show”.

The personification of the animals was deliberate and humorous. Jen explains, “The more you learn about animals, how they’re mating with other animals, for instance, you think about the LA dating scene, and then you think about animals, which have some funny crossovers. As we have these neighborhoods in a city, they also have their neighborhoods.” Jason chimes in, “For example, a monarch butterfly says, ‘Hey babe, let’s overwinter in Mexico’—a line that could just as easily come from an Angeleno planning a winter getaway…” Jen adds, “And the monarch is saying like, I’ve got a really busy schedule.” Jason elaborates: “They have this multi-generational migration habit where up to five generations of butterflies will go from Central Mexico all the way up to Nova Scotia and Southern Canada and then back again. And they do this over five different generations. Even more remarkable—five generations later they’ll return to the same tree”.

The California kingsnake became another favorite. “Well, it’s not an LA Dodgers hat. Thank you very much,” Jen jokes, describing the snake’s illustrated headwear. “It’s a Los Angeles hat”. The kingsnake’s ability to live almost anywhere—from woodland to wetlands to suburban basements—made it a perfect symbol of LA’s adaptability. As we say, “you live in my backyard.”

Navigating the hard truths

Panel telling the story of P22

While humor runs through the exhibition, we didn’t shy away from difficult realities. Rattlesnakes, for instance, posed a design challenge. “I made this drawing. When you might be on a hike, you may encounter a rattlesnake. And this is frightening, right?” Jen recalls. “There was like a discussion about making the rattlesnake so it wasn’t so intimidating, which was funny because I was like, well, a rattlesnake is intimidating and very scary, and you can’t really take animals and smooth out all the rough edges, right? Because that’s not what they are!”

The story of P-22, the famous mountain lion, underscored the fragility of human-wildlife interactions. Jason reflects, “Take the story of P-22—a famous mountain lion that was known around the Griffith Observatory. And eventually, through a series of interactions with humans (and despite best intentions) he dies”. The exhibition addresses this directly, including data on rat poison’s devastating impact on mountain lions and the importance of hazing techniques—like carrying a can filled with coins—to maintain healthy boundaries.

“Even though we anthropomorphized the animals, we shouldn’t forget the fact that there are negative results of some of our interactions with the animals. We should be mindful of that”.

Making data visible and inviting

One of our core practices is taking complex datasets and transforming them into visuals that invite exploration rather than intimidation. “Part of what we do is find information and basically make it much more understandable to the general public and to ourselves,” Jen explains. “Like rat poison killing pumas, right? We made this diagram so that we have the data there, but you can just see it more clearly”.

A standout piece in the exhibition is the massive chart “Animal Species at Risk in California”, which visualizes 930 species by class and phylum, showing which are extinct, endangered, or imperiled. Working with data visualization collaborator Paul Buffa, we transformed this overwhelming dataset into the shape of a California poppy—the state’s native flower.

“If I saw this information in spreadsheets, I would be very intimidated because it’s just a lot of information,” Jen admits. “But since we put it into this California poppy, which is a native plant, it invites you over to explore it. You don’t have to look at every single detail, but it is fascinating”.

The wall also includes a Sankey diagram comparing California’s at-risk species to global standards—revealing that California has considerably more species in danger. And the bar chart showing imperiled species? “It literally towers over your head. It’s about seven and a half feet tall, so we wanted it to have a physical relation to how you encounter the data”.

The iconic title wall: Observing Union Station

The exhibition’s title wall features three illustrated characters walking across a vibrant gradient backdrop—each carrying something that subtly references animal behavior. Jen describes how these characters emerged: “We were standing in Union Station, and I could see people walking through, going from the trains to the entrance, and it gave me this idea about what kind of people would be walking through LA and walking particularly in Union Station”.

The older gentleman carries a bag of groceries, echoing how animals travel to forage and transport food. The young woman holds a bundle of flowers, referencing seed distribution—how seeds attach to animal coats or are eaten and deposited elsewhere. “All said and done, the more time you spend with the exhibition, you know every element is intentional and thought out and has a relationship to the information that we learn as we go along,” Jen explains.

The massive 3D map: Placing yourself in the data

Perhaps the most captivating element of LA on the Move is the enormous 3D map, created in collaboration with Julian Hoffmann Anton. This wasn’t just a cartographic exercise—it became a months-long process of negotiation, expansion, and refinement.

“Every project we do, we discuss a map component,” Jen says. “And sometimes we have time to do it, and sometimes we don’t because what starts as a simple map becomes very complex. It’s because a map is political. You can’t leave anyone off because they’ll notice”.

Initially, the map focused narrowly on downtown LA and Union Station. But through conversations with Metro Art staff and community input, it expanded dramatically—eventually encompassing all of LA County and parts of Orange and San Bernardino Counties. “We were pushed and pushed on the map, but that’s not a bad thing. It’s a much more inclusive map, so when visitors come to Union Station, they can find themselves”.

In addition to showing every detail of the city, the map tracks sightings of all five featured species across the region, revealing fascinating patterns. Mountain lion sightings appear surprisingly far south of downtown; California kingsnakes cluster in parks and mountains but occasionally show up near Marina Del Rey; while coyote sightings may reflect research centers as much as actual populations.

“I’ve never seen a map of this scale, physically, of this detail,” Jason marvels. “It’s an extremely detailed 3D rendering of the entire metro area”. And because it wraps around a corner, visitors can find neighborhoods that might have been cropped out of a conventional map. Jen describes a photograph of a man pointing to the side panel: “He’s finding himself, which we wouldn’t have had in our original idea”.

Adding Sound: Activating the Space

For the first time in a Data Vandals project, we incorporated audio. “I pushed for this because we wanted to activate the space as much as possible,” Jen says. “We’re dealing with walls, and we wanted ways to expand these rectangles out”.

Visitors can hear the sounds of pumas, coyotes, and hawks. “I thought, okay, if I’m walking through Union Station, what is it like to hear some of these animals?” Jen explains. The sounds are surprising—sometimes beautiful, sometimes unsettling. Jason describes, “The mountain lion has lots of really low growls, more aggressive than a purr, and I found those to be unsettling”. Coyote calls also sound strange and a bit frightening, but these sound elements ground the exhibition in sensory reality, reminding visitors that these are not cartoons but living, breathing neighbors.

Iconic cutouts and LA signage culture

Atop each wall, we placed large cutouts of the animals lifted high on Sintra board to add height and visual drama. Jason says, “We wanted them to refer to the history of the Hollywood back lot, even the Hollywood sign itself.”

Jen reflects on LA’s distinctive signage culture: “I think the signage is very different from anything you ever really see on the East Coast; in New York we don’t have that kind of sign culture and I find it fascinating and really attractive”.

The billboard aesthetic also responds to Union Station’s architecture—a stunning 1930s Art Deco space with soaring ceilings and intricate tilework. “Union Station is so gorgeous, you want to try to do it justice. Something that iconic, you worry that whatever you do is going to be overwhelmed”. To honor the building, we photographed the tile floors and extracted colors to integrate into our palette, creating a dialogue between the historic architecture and our contemporary street-style graphics.


As the exhibition settles into its year-long run, we hope it becomes a recurring destination: a place where commuters pause for five extra minutes, where families return to discover new details, where Angelenos see their neighborhoods reflected in a 3D landscape populated by shared species.

“I just want people to enjoy it and have fun with it and see themselves in the data,” Jen says. “It’s so fun to see the different types of people, and I feel like I could draw those people and put them into the exhibition. It reflects a lot of our intentions”.

Jason hopes for depth and revisitation: “I’d love that the exhibition is very detailed; you can return to it over and over and learn something new each time that you revisit it”. And Jen adds with a laugh, “I hope it brings us back to California again and again – we love LA!”


“LA on the Move” is on view at Union Station through 2026.

For more information: https://datavandals.com/la-on-the-move.

Learning to Read Academic Papers by Making Data Comics
Thu, 18 Sep 2025 | https://nightingaledvs.com/learning-academic-papers-making-data-comics/

Learning to read academic papers is a considerable challenge for many college students. Take, for instance, the task of reading a research paper for an upcoming class discussion. Students who opt to read the piece from start to finish will, at best, encounter unfamiliar technical terms and ideas in an unusually formal writing style. Students who instead approach the paper by looking for specific areas of interest face an additional challenge — figuring out where to find the information they’re looking for. For instance, while knowing that authors often summarize their contributions in the abstract, introduction, and discussion might seem obvious to those with practice reading papers, these patterns of what goes where are learned and highly area-specific. As Adam Ruben wrote in his satirical piece about the difficulties of reading academic papers: “Nothing makes you feel stupid quite like reading a scientific journal article.”

So when I taught a new Human-Computer Interaction (HCI) course where many students would be engaging with the field’s literature for the first time, I knew I needed to get creative. I teach Computer Science at Mount Holyoke College, a small, private, predominantly undergraduate liberal-arts college in Western Massachusetts. When I set about the task of designing the new course (an intermediate-level elective for undergraduate Computer Science majors), I set an objective to expose students to the broad assortment of areas in HCI through engaging with exciting new literature — the final weeks of the course would be at the same time as the largest HCI conference in the world (ACM CHI), after all! However, I knew that, while students would be familiar with reading academic texts generally, this might be their first time engaging with Computer Science literature (broadly) and almost certainly their first time reading HCI literature (more specifically). Therefore, I wanted to design an activity which would help students get more comfortable navigating new texts in a way that felt fun and approachable, but would build strong skills they could apply to future readings.

I ultimately designed an activity in which students create data comics as a means to better understand the structure and content of research papers containing human-subject studies. Inspired by past work about how creating data comics (data-driven stories in a comic strip-like form) might benefit researchers, I designed this activity to use the process of creating data comics to benefit readers’ skills.  The big idea is this: in order to create a data comic, a student must both find the pertinent information they need to tell the story of that paper and understand enough of what they’ve found in order to summarize it. Further, because creating a data comic may feel more fun, creative, and low-stakes than other deliverable formats that students are familiar with (e.g., reports or presentations), students may be able to engage with this difficult work with less fear and stress. 

In this short report, I will provide an introduction to data comics generally, explain the activity I designed involving them, and reflect on the opportunities and challenges of conducting this kind of activity.

Data comics are a type of narrative visualization which incorporates data and visualizations into comics. Here are excerpts from three examples (from left to right): “The Future Sounds Like Chinese” by Josh Kramer for The Nib, “4.5 Degrees” by XKCD, and “Humans have made 8.3bn tons of plastic since 1950. This is the illustrated story of where it’s gone” by Susie Cagle for The Guardian. (Images property of their original sources)

What are data comics?

Data comics are a type of narrative data visualization that presents a data-driven story in a comic strip-like form. While data comics may look like any other comic strip at first glance, they incorporate visualizations into their data-based narratives, using different combinations of visualizations, (narrative) flow, narration, and words and pictures (see the figure above for examples).

Data comics can be a particularly powerful tool in educational contexts because they leverage, break down, and communicate potentially complex information in an approachable format. Authors have written about the potentially helpful role of data comics in a variety of contexts including helping people make sense of their personal data and better understanding how to approach unfamiliar visualizations through reading and creating explanatory comics (for a comprehensive survey of data comics in education, see Boucher et al.’s 2023 survey). Further, creating data comics provides students an opportunity to practice both high- and low-order cognitive tasks (e.g., finding and summarizing) in a creative, low-stress context — ingredients which Psychology and Education research tell us contribute to long-term learning.

The application of data comics which most inspired the activity I designed was Wang et al.’s work on data comics as a means to report controlled user studies. In their paper, Wang et al. describe how authors of scientific papers could use data comics as a means to report information about their user studies in a format that might be more accessible to both experts and non-expert readers. While my students aren’t (often) authors of scientific papers, they were readers of them, so I wondered: could making data comics help student readers understand the structure and content of papers? To try this out, I modified Wang et al.’s workshop procedure (described in their publication and the workshop website) into the following activity for students which would be possible to accomplish within a limited class time.

The activity

In this activity, students created a data comic for an existing research paper containing a human-subject study during one 75-minute class session plus a 10-minute pre-class preparatory discussion. While I hoped that students would come out with a good understanding of the paper they'd read, the primary objective of this activity was to build students' confidence in finding and summarizing key pieces of information in an academic paper so that they could apply those skills to future reading tasks. To make the initial paper navigation smoother, we spent 10 minutes in the prior class session discussing the sections of a "typical" research paper and their high-level purposes. For instance, we talked about how abstracts are a summary of the work as a whole (and thus serve as great overviews) but often do not contain critical details about methods, results, and impacts, which can be found elsewhere. The purpose of this discussion was to provide a general roadmap for students to recall, apply, and expand in the following class, while they were actually creating their comics.

At the start of the main class session, I introduced the concept of data comics and students explored examples of existing data comics. The goal of this introduction was to help students get an idea of what data comics can feel and look like. We used Bach et al.'s data comic gallery as a starting place, combined with other examples students found elsewhere online.

Then, I divided students into small groups of two or three and asked each group to pick a paper to convert into data comic form. In the inaugural version of this activity, I selected three short papers from ACM CHI for groups to choose from. Each paper incorporated a human-subject user-study of some kind to connect the exercise to topics students had seen earlier in the course related to human-centered design methods. Although I selected the papers in this iteration, the search process could alternatively have been student-driven with students either independently proposing papers or consulting proceedings together, depending on the goals and time constraints.

Each group was then given several sheets of plain paper and a set of colored markers to create their data comics. While there are lots of great tools for creating digital data comics, I intentionally chose to have students create comics with paper and markers to reduce the friction that comes with learning a new tool and the perceived pressure to try to make something that “looks nice.” This philosophy is consistent with substantial existing work on the benefits of creating paper-based, lo-fi prototypes of visualizations to facilitate idea generation and divergent thinking. Further, I wanted to focus students’ attention on the “fun” of being creative — I’ve learned that college students often don’t get to play with markers in class as much as they might like!

Then, it was time to dive into comic making. Given the amount of time students had to create their comics, I asked them to focus on finding the information required to tell a story with the following simplified 3-part structure:

  • Motivation & Question: Explain the researchers’ central research question(s) and why they matter
  • Methodology: Explain what the researchers did to try to answer their research question(s)
  • Results: Explain what the researchers learned from their experiment(s), focusing on the most important outcomes

For each part, students were asked to find the information in the paper in the relevant section(s), summarize the most important pieces of information together as a group, and decide the best way to communicate that summarized information in their comic through a combination of images, text, and visualizations.
At the end of class, each group shared their creations with their classmates. Of the comics created in that inaugural class, several focused on Gui et al.'s paper A Field Study on Pedestrians' Thoughts toward a Car with Gazing Eyes (perhaps because of its cute "self-driving car with eyes" concept!). You can see components of three different groups' comics for this paper in the Figure below.

The figure is divided into three sections, each with a section of a different student comic. There is a cute round car with cartoon-y eyes on the front featured in all three. The first section is labeled "Part 1: Motivation & Questions" and features a comic where students have written: How do pedestrians perceive the physical eye on the car as a communication mode in an uncontrolled real-world setting? Five key findings." The second section is labeled "Part 2: Methodology" and shows a 6-panel comic summarizing the methods the paper used. The final section is labeled "Part 3: Results" and summarizes the paper's results including where students have written "Eyes are IMPORTANT for self-driving cars!!!"
During the activity, students created comics based on ACM CHI papers. Here are sections from three different student groups’ comics based on Gui et al.’s paper “A Field Study on Pedestrians’ Thoughts Toward a Car with Gazing Eyes.”

Possibilities and challenges

Overall, I found the first version of this activity to be quite successful, in terms of both positive student reception and accomplishing learning goals. While most groups did not produce complete, polished comics in the 75-minute session, they all engaged with their chosen paper deeply over the session and wrestled with both the format and the content in productive ways.

Additionally, students reported that they loved this activity: in their end-of-week reflections, they repeatedly described the activity as the highlight of their week. Students' comments indicated that they found it both fun and extremely helpful for furthering their understanding of how to approach and read papers in the future, emphasizing that the act of creating something new based on the reading was particularly impactful. While these initial impressions were volunteered as part of a broader weekly reflection assignment for the course (and thus may not reflect all student reactions or reflections), they indicate that this activity was a positive experience overall for many. I plan to collect more systematic feedback from students regarding what precisely worked (and didn't) the next time I run this activity.

Despite my students’ generally positive reaction, there are certainly challenges to conducting this kind of activity which I’d suggest readers think about if they are considering doing something similar in their own context.

Allocating the Right Amount of Time

First, selecting the right amount of time for this activity can be a challenge. In the initial version of this activity, my students made their data comics over the course of one 75-minute class session, supplemented with a short 10-minute introductory lesson in the prior class. Though I do think students accomplished enough deep work in this time to ultimately improve their reading skills, few of them came away with a fully complete comic. Additionally, while students shared their comics with their classmates, we did not have time for students to give each other feedback on their comics or for students to refine their comics based on that feedback. As discussed by Boucher et al., engaging in these kinds of feedback loops is critical to both developing more polished, effective comics as well as cementing learning.

One approach to picking the "right" amount of time for this activity may be to think about how complex the main learning objective is for the session and allot an amount of time to match it. For instance, an implementation of this activity which mainly aims to build students' skills for finding information may require less time than versions that focus on the summarization and presentation aspects, because finding information is a less complex task than summarizing it (according to Bloom's "Taxonomy of educational objectives"). In situations where the activity time is fixed, it may also be possible to incorporate pre- or post-activity work to support in-class activity time. For example, Boucher et al. had workshop participants identify a visualization to explain in a data comic before beginning their workshop session, and Wang et al. asked participants to identify a dataset to use between the first and second sessions in their 3-session sequence.

Considering Existing Familiarity With & Orientation Toward Key Ideas

Second, while comics are enjoyed by a diverse group of people throughout the world, they are not universally understood. Instead, readers must learn how to decode the visual and linguistic conventions in comics, like any other form of narrative. One impact of this reality is that students who are less familiar with comics may face an extra barrier to their learning. Therefore, educators who are considering this activity should gauge students' existing familiarity with comics and budget time for familiarization (e.g., by allotting additional time to analyze the format or work with existing comics prior to asking students to make their own).

In addition to comics, it is important to consider students’ familiarity and comfort with visualizations. As previously observed by Wang et al., it can sometimes be a challenge to get students to integrate visualizations into their comics, depending on their existing experiences with the topic. During this iteration of the activity, I observed that some, but not all, of the groups incorporated visualizations into their comics, though it is unclear whether this was because they were uncomfortable with using visualizations or just ran out of time (see Figure below for an example of one group’s use of a timeline and pair of pie charts to summarize the methods and results). Educators whose students are less familiar or comfortable with making and using visualizations may find tools like Boucher et al.’s “Comic Construction Kit” or Bach et al.’s data comic design patterns cards helpful to scaffold this challenge and re-direct students’ energy toward learning objectives.

Further, convincing students that creating comics is a worthwhile learning activity may be difficult depending on their existing orientations toward this kind of activity. While work in Educational Psychology has shown that creative activities like drawing can be beneficial to learning in STEM fields, students may not view it this way, depending on their existing beliefs about these activities. For instance, while my students were enthusiastic about creating comics as a component of a Computer Science course, my institution is a liberal-arts college which highly emphasizes interdisciplinarity and takes a pretty broad view of what Computer Science is and how it can be taught. However, educators at institutions which take a more traditional view of what the "work" of their field is or of the acceptable pedagogies used to teach it, or which abide by a stronger science/art divide, may need to do additional work in order to get student buy-in.

The figure shows two panels of a data comic. The first panel shows a methodological timeline which maps the steps the researchers took from data set to the final user survey including selecting the recommendation algorithm, recruiting participants, pre-task questionnaire, and use of the interface. The second panel has 2 pie charts which show higher satisfaction with algorithm 1 across two user groups.
Some, but not all, groups incorporated traditional visualizations into their data comic. This is an example of the visualizations one group used to summarize the steps in the methodology and some of the results of Noh et al.’s “A Study on User Perception and Experience Differences in Recommendation Results by Domain Expertise: The Case of Fashion Domains.”

Selecting the Right Comic Format for Your Paper Type

Third, while it may be possible to create a data comic for any academic paper, the 3-part format described in this article may need to be modified for papers without experimental studies. When I initially designed this activity, I knew that students would be working with papers containing human-subject studies because we had covered related methods earlier in the course. Therefore, the 3-part Question/Methodology/Result narrative structure used by my students was picked with these kinds of papers in mind. However, these three sections may not meaningfully encapsulate other types of papers which do not use experiments as the basis of their claims (e.g., theoretical or position papers). Educators who want students to create comics for non-study papers should consider their main components and select a structure with that in mind. For instance, data comics for theoretical or position papers could instead map out the steps or pillars of the argument being made and how they relate to each other.

Conclusion

In conclusion, creating a data comic based on an existing research paper may be an effective learning activity because it forces students to practice both finding pieces of information of interest within the paper’s unfamiliar structure and digest the information they find in order to transform it into a new form — two stumbling blocks for those new to reading academic papers. I am planning to bring similar activities to my other courses and I hope that this article inspires other educators to bring data comic creation activities into their work as well.

The post Learning to Read Academic Papers by Making Data Comics appeared first on Nightingale.

Scrollytelling with Closeread: The Super Low-Code Way to Bring Your Data Project to the Web! https://nightingaledvs.com/scrollytelling-with-closeread/ Thu, 22 May 2025 14:39:00 +0000

Introduction

What is Scrollytelling?

Scrollytelling is a dynamic, interactive storytelling technique, often used in web-based formats, that reveals insights, visuals, and narrative elements as the user scrolls down the page. It allows data stories to unfold gradually, guiding the reader through a structured narrative in a way that feels both natural and engaging.

Why Scrollytelling Is Effective for Data Communication

Scrollytelling is a powerful way to communicate data because it helps reduce information overload, boosts user engagement, and makes insights easier to digest. Rather than overwhelming users with dense dashboards or complex visuals all at once, it guides them through your story step by step—just by scrolling.

Scrollytelling is not a replacement for other presentation methods such as dashboards and static PDF reports. Instead, it works best when there's a need to communicate stories or data insights to a broad audience with varying levels of data literacy. It allows you to wrap each insight in meaningful context and empowers you to control the pacing and structure of your narrative while keeping readers engaged through suspense and sequential reveals. This results in a smoother, more intuitive experience, especially for readers who need guidance or are less data-savvy. This level of engagement is often difficult to achieve with traditional methods of presentation. As a dynamic and versatile technique, scrollytelling supports various content formats such as text, charts, maps, GIFs, images, and more.

The Challenge

Despite its many advantages, scrollytelling has traditionally required web development skills—something many dataviz professionals don’t typically have. In the past, even large media houses with dedicated teams would spend significant time and effort building a scrollytelling project. The tradeoffs were high, making it a less viable option for time-sensitive or resource-constrained projects.

For smaller teams or solo practitioners, this barrier has often made web-based storytelling feel out of reach. But that changes today. Thanks to the many developer communities, the barriers have been lowered so significantly that you can put up a scrollytelling project in a few hours, often without even needing to code!

What You’ll Learn in This Tutorial

By the end of this tutorial, you’ll be able to build and deploy a fully functional scrollytelling project that takes your insights beyond dashboards and onto the web! Specifically, you’ll be able to:

  • Set up your environment and craft your data story using the scrollytelling technique
  • Build your project locally and deploy it to the web for free using GitHub and Vercel (or any other deployment platform that supports dynamic webpages)

Don’t worry—we’ll walk through everything step by step, from scratch. Whether you’re an absolute beginner or just looking to sharpen your skills, this tutorial will help you build your first scrollytelling project from the ground up!

One more thing: This tutorial is designed to be hands-on, so as you follow along, feel free to copy each line of code and paste it into your Closeread document to see it in action.

Tools we’ll be using

For this project, we’ll use the Closeread extension to create our data scrollytelling experience. Closeread is a Quarto extension designed specifically for building interactive, scroll-based narratives. To use Closeread, you’ll need two key tools:

  1. Quarto: an open-source publishing system that supports Python, R, Julia, and ObservableJS. It allows you to create dynamic, multi-format documents using Markdown, Jupyter Notebooks, or your preferred editor. Since Closeread is built on top of Quarto, installing Quarto is a necessary first step.
  2. A Code Editor: This is where you’ll write and manage your project files. We’ll be using Visual Studio Code (VS Code) in this tutorial, but feel free to use alternatives like RStudio, Atom, or any editor that supports Quarto projects.

To get started, install the Quarto command line tool from the official Quarto website. Follow the standard installation process for your operating system. Since I’m using Windows, I downloaded it as shown below.

Downloading the Quarto installer from the official website.

We’ll also be using GitHub for version control and Vercel to deploy the final project to the web.

Once you’ve installed Quarto, you’re ready to install the Closeread extension. We’ll cover that in the next section.

Set up your project environment

Step 1: Set Up Your Project Directory

Start by creating a folder named closeread_tutorial. You can place this folder anywhere you’d like your project to live. Personally, I prefer to keep it on my Desktop, so my directory structure looks like this:

C:\Users\USER\Desktop\closeread_tutorial

Next, open a terminal and navigate to the folder you just created. An easy way to do this is by copying the full path to the folder.

If you’re on Windows, press Windows Key + R, type cmd, and hit Enter to open the Command Prompt.

Then, run the following command (update the path to match your own folder location if different):

cd "C:\Users\USER\Desktop\closeread_tutorial"

This sets your working directory to the project folder. You can confirm it’s successful by checking that the command prompt now matches the folder path you copied earlier.

Command prompt confirming that the working directory is now set to the Closeread project folder.

Install the Closeread Extension

To install the Closeread extension, run the following command in your command prompt:

quarto add qmd-lab/closeread

Make sure you’re connected to the internet, as this command will fetch the extension from an online repository. You may receive a few prompts asking whether Quarto extensions should be allowed to execute code during document rendering. Simply type Yes for each prompt to proceed with the installation.

Your command prompt should now look similar to this:

Command prompt showing that the Closeread extension was successfully installed in the project folder.

The message highlighted in red confirms that Closeread has been successfully installed. You can also verify this by refreshing your project folder. You’ll notice that a new folder named _extensions has been added to it.

Congratulations! 🎉 You’re now all set to create your first Closeread project.

Let’s dive in!

Create a basic Closeread project

Now, inside your project folder, create a new file named index.qmd. Open the file in your code editor and paste the following lines of code:

---
title: My First Closeread
format: closeread-html

---

Hello World! Please read my Closeread story below.

:::{.cr-section}

Closeread enables scrollytelling.

Draw your reader's attention with focus effects. @cr-features

:::{#cr-features}
1. Highlighting  
2. Zooming  
3. Panning  
:::

:::

You’ve just created your first Quarto document! 🎉

Now, let’s render and preview it to see your Closeread project in action. Go to your terminal and run this quarto command:

quarto render index.qmd

This should render your project.

After rendering, you'll notice that a new file and a new folder have been added to your project directory:

  • A folder containing the necessary libraries and assets used by your Closeread project.
  • An HTML file generated from your base Quarto document, which serves as the interactive output.

These confirm that your project has successfully compiled and is ready for further development.

To preview the project you just created, open the index.html file in your browser—and voila! Your first Closeread project is live!

We will dedicate the next section to understanding the building blocks of a Closeread project. Let’s ride on 🔥

Understand the building blocks of Closeread

A Closeread project is built as a section within a Quarto document, defined using fenced divs. At its core, a Closeread section consists of three main components: Section, Sticky, and Trigger.

1. Section

A Closeread section is created using opening and closing fenced divs with the .cr-section class. This defines the scrollytelling block.

Here’s what the simplest Closeread section looks like:

:::{.cr-section}
This is a Closeread section
:::

This section can be enhanced with stickies (content that remains fixed while the user scrolls) and triggers (content that activates the sticky as the user scrolls). We’ll explore those next.

💡 Pro Tip: If you wrap your entire Quarto document in a fenced div with the .cr-section class, the whole thing becomes a Closeread document. 😉 This means everything in your document becomes part of the scrollytelling experience—great for fully immersive data stories!
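For instance, a minimal sketch of a fully wrapped document (the title, sticky name, and image path below are placeholders, not files from this tutorial):

```markdown
---
title: My Fully Immersive Story
format: closeread-html
---

:::{.cr-section}

Everything in this document is now part of the scrollytelling experience.

The chart appears when the reader reaches this line. @cr-chart

:::{#cr-chart}
![](images/my-chart.png)
:::

:::
```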

2. Stickies

A sticky is an element within a Closeread section. It could be a block of text, an image, a video, or any element that can be rendered in the browser. It's the element you want readers to focus on: you can set it to stick to the screen as the reader scrolls through the page.

Stickies can also be made invisible by default, appearing only when the reader scrolls to the point where the trigger is activated. To declare an element as a sticky, wrap it within a fenced div and assign it an identifier prefixed with cr-, as shown below:

:::{#cr-identifier}
This block of text is a sticky!
:::

Since the sticky must be enclosed within a section, the full code would look like this:

:::{.cr-section}
This is a Closeread section

:::{#cr-identifier}
This block of text is a sticky within the Closeread section!
:::

:::
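The same pattern works for non-text stickies, too. For example, a sketch of an image sticky (the identifier and file path are placeholders):

```markdown
:::{#cr-figure}
![](images/my-figure.png)
:::
```

Any element that renders in the browser can be declared this way.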

3. Triggers

As mentioned earlier, a trigger is the element that activates a sticky in a Closeread document.

Remember the cr-identifier we assigned to the sticky above? The one prefixed with cr-? That’s the element we’ll use to trigger the sticky.

Here’s how triggering works:

  • Identify the point in your document where you want the sticky to be activated.
  • At that point, reference the sticky's identifier, prefixed with @.

So, the sticky #cr-identifier is activated by the trigger @cr-identifier.

Let’s update our full code to include a trigger:

:::{.cr-section}
This is a Closeread section

I want my sticky to appear here ➡ @cr-identifier

:::{#cr-identifier}
This block of text is a sticky within the Closeread section!
:::

:::

Simple, right? When a reader scrolls to the trigger (@cr-identifier), the sticky pops into view!

Updated code:

Now, copy the updated code and paste it into your index.qmd file, replacing everything after the line that says:

Hello World! Please read my Closeread story below.

Your document should look like this:

---
title: My First Closeread
format: closeread-html

---

Hello World! Please read my Closeread story below.

:::{.cr-section}
This is a Closeread section

I want my sticky to appear here ➡ @cr-identifier

:::{#cr-identifier}
This block of text is a sticky within the Closeread section!
:::

:::

Re-render the project:

Once you’ve updated your index.qmd file with the new code, open your terminal and run the quarto render command just like before:

quarto render index.qmd

Refresh your browser tab to see the updated Closeread project. As you scroll, your sticky will appear at the specified trigger point!

The updated Closeread project displayed live in the browser.

Take a moment to celebrate this milestone. If your output doesn't match the screenshot above, take some time to review your code and ensure it matches the example.

Adding styling and interactivity to your Closeread document

Closeread offers several options for styling your project—ranging from prebuilt effects to full-fledged themes. What’s more, you can even extend your project’s styling using an external CSS stylesheet. The Closeread styling documentation provides a detailed guide on how to style your document. You can declare the styling template in the YAML configuration section of your document. For this project, let’s apply some of these techniques to further customize our document, starting with the basics: focus effects.
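As a quick illustration, here is a minimal sketch of declaring a styling template in the YAML section (superhero is one of the Bootswatch themes bundled with Quarto; any built-in theme name works in its place):

```markdown
---
title: My First Closeread
format: closeread-html
theme: superhero
---
```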

Focus Effects

Focus effects are prebuilt functions within Closeread that add interactivity and dynamism to your Closeread projects. As described in the Closeread documentation, these features “guide your readers’ attention to aspects of your stickies.” A summary of these effects is provided in the table below:

| Effect | Description | Syntax Example |
| --- | --- | --- |
| Scaling | Magnifies or reduces the size of an element by a given factor. | scale-by="3": triples the size of a sticky. |
| Panning | Moves the view to a specified section of the sticky (e.g., the top-left corner). | pan-to="-30px,30px": pans 30 pixels left and 30 pixels down. |
| Zooming | Enlarges a specific portion of the sticky to focus the reader's attention. | zoom-to="3": zooms into line 3. |
| Highlighting | Visually emphasizes a span of text or a line by changing its style or color. | highlight="2-3": highlights lines 2 to 3. |
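Note that focus effects are written on the trigger, not on the sticky itself: wrap the trigger reference in square brackets and put the effect attributes in curly braces. A generic sketch (the identifier and values are placeholders):

```markdown
Scale and highlight the sticky at this point [@cr-identifier]{scale-by="2" highlight="2-3"}
```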

Focus Effects in Action

The purpose of this section is to demonstrate some of these focus effects. The next few lines contain short narratives along with their corresponding Closeread commands. We’ll use some images and text blocks as stickies and apply these effects to them.

NOTE: I’ve taken a conversational approach to explain the purpose of each feature. This is to keep things engaging. But don’t forget, the narratives also form part of the text you’ll copy into your Closeread document!

Now, back to our updated code. Quickly read through the following lines to get a sense of the flow. Afterwards, download these two images: grid.jpg and grid-highlighted.jpg. Create a folder named images directly inside your main project folder (where your index.qmd file is), and paste the two images you just downloaded into this folder. Then, copy the code block below into your Closeread document to see the effects in action:

Below is another block of text we'll be working with: @cr-highlighted
First, let's scale this block of text by two:
Scale this block of text by two [@cr-highlighted]{scale-by="2"}

Next, we’ll highlight lines 2 and 3 while keeping the same scale:
Lines 2 and 3 are scaled and highlighted [@cr-highlighted]{scale-by="2" highlight="2-3"}

Now, let’s bring in an image:
Loads an image @cr-image

It’s a bit large at first as it takes up the full screen. Let’s scale it down:
Image has been scaled down [@cr-image]{scale-by="0.5"}

Finally, we’ll pan to the portion highlighted in red:
Pan the image to the section highlighted in red [@cr-image2]{pan-to="-75%,75%" scale-by="1.5"}

:::{#cr-highlighted}
| 1⃣ This is the first line.
| 2⃣ This is the second line.
| 3⃣ This is the third line.
| 4⃣ And this is the fourth line.
:::

:::{#cr-image}
![](images/grid.jpg)
:::

:::{#cr-image2}
![](images/grid-highlighted.jpg)
:::

💡Pro tip: When you pan and scale at the same time, you end up zooming! (pun intended 😉)

Note: Panning can be a bit unintuitive at first. You might need to experiment with the position values to get the result you want. A bit of trial and error helps here.
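Putting the tip and the note together, here are two ways to zoom, sketched against the stickies already defined in this section (the pan and scale values are illustrative and may need tweaking):

```markdown
Zoom straight to line 3 of the line block [@cr-highlighted]{zoom-to="3"}

Zoom by panning and scaling at the same time [@cr-image]{pan-to="-25%,25%" scale-by="2"}
```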

Applying Additional Styling

Up to this point, the YAML configuration section of our project looks like this:

---
title: My First Closeread
format: closeread-html

---

Update it to apply the following styling:

---
title: "Understanding Tree Diagrams"
theme: "superhero"
fontsize: 16px
format: 
    closeread-html:
        cr-section:
            layout: "sidebar-left"
        cr-style:
            section-background-color: "#08508a"
            narrative-background-color-overlay: "#08508a"
            narrative-text-color-overlay: "#08508a"
            narrative-border-radius: 5px
            narrative-overlay-max-width: 60%

---

What we just did: Modified the YAML configuration to include some additional styling, such as:

  • Setting the layout to sidebar-left
  • Defining the background color under cr-style
  • Setting the theme to superhero
  • Adjusting the font size, and more

Each of these would have required more complex CSS code, but Closeread simplifies the process—you can simply call a named section and apply the style directly.

Applying custom CSS

If you’d like to further customize your Closeread project using an external .css stylesheet, you can follow the standard approach used in regular web development: by assigning styles directly to elements. All you need to do is link your Closeread document to the external stylesheet—and you’ll do this in the YAML section of the document (the part enclosed by triple dashes).

In this example, let’s change the color of the text in the narrative section of our Closeread project. The narrative section is the part of your story that delivers the main content. By default, the text appears black on desktop. We want to change it to white.

Steps:

  • Within the root of your project directory, create a new empty file and name it style.css.
  • Paste the following lines of code into the file and save it:

.narrative {
  color: white;
}

  • Next, reference the external CSS file in your Closeread document. You can do this by navigating to the YAML configuration section of your document and pasting the following line:

css: style.css

Your YAML section should now look like this:

---
title: "Understanding Tree Diagrams"
theme: "superhero"
fontsize: 16px
format: 
    closeread-html:
        cr-section:
            layout: "sidebar-left"
        cr-style:
            section-background-color: "#08508a"
            narrative-background-color-overlay: "#08508a"
            narrative-text-color-overlay: "#08508a"
            narrative-border-radius: 5px
            narrative-overlay-max-width: 60%
css: style.css

---

Take note of the indentation!

Publish and deploy

You’ve made it this far—well done! You’ve built your first Closeread project. But a project this good shouldn’t live only on your computer. It’s time to publish it to the web and share it with the world! You’ll use GitHub to store your project online, and Vercel to host and publish it for free.

Step 1: Create Your GitHub & Vercel Accounts

  • Go to github.com → Click Sign Up and follow the steps to create your account.
  • Then, visit vercel.com → Click Start for Free and sign up using your GitHub account. This allows Vercel to access your repositories for deployment.

Step 2: Upload Your Project to GitHub (No Code Required)

  1. On GitHub, click the + icon at the top-right → Select “New repository”.
  2. Give your repository a name like closeread-project, and click Create repository.
  3. On the next page, click “Uploading an existing file”.
  4. Locate your Closeread project folder on your computer.
  5. Drag and drop everything inside the project folder into the GitHub upload area.
  6. Scroll down, add a commit message like Initial upload, and click Commit changes.

Great! Your web story is now on GitHub.

Step 3: Deploy with Vercel

  1. On the Vercel dashboard, click “Add New” > “Project”.
  2. You’ll be prompted to choose your preferred Git provider. Select Continue with GitHub.
  3. You’ll see a list of your GitHub repositories. Select the one you just uploaded.
  4. Configure your settings:
    • Framework Preset: Choose Other or Static Site
    • Output Directory: leave the default option (root)
  5. Click Deploy.

Vercel will build and deploy your project in seconds.

Step 4: View & Share Your Live Story

Once deployment is complete, you’ll get a live URL where you can view your project on the web, e.g., https://closeread-tutorial.vercel.app/

Click the link to view your published Closeread story—fully interactive and hosted online!

Closeread project – live on the web!

Conclusion

If you’ve followed this tutorial up to this point, you should now be familiar with how to build a scrollytelling project from scratch using Closeread. You’ve learned the core building blocks of a Closeread project—such as sections, stickies, and triggers. You’ve also explored how to style your project using both built-in options and external CSS files. Finally, you now know how to host your project on GitHub and deploy it with Vercel, so your story can go live and be shared with the world.

This gives you a solid foundation for taking your data storytelling skills to the next level.

Up next is a second project I’ve included to give you a more hands-on experience. You’ll find a script and an image folder linked here. Your task is simple: follow the script, insert the appropriate images, and apply the relevant Closeread effects to bring the story to life—just like in the completed version here.

This practical exercise is designed to help you reinforce everything you’ve just learned and give you the space to experiment further with Closeread’s effects and features. Once you’re done, feel free to share your completed project with your network on social media—and don’t forget to tag me. I’d love to see what you come up with!


This project is also available on GitHub.


The post Scrollytelling with Closeread: The Super Low-Code Way to Bring Your Data Project to the Web! appeared first on Nightingale.

I Stopped Using Box Plots: The Aftermath https://nightingaledvs.com/i-stopped-using-box-plots-the-aftermath/ Tue, 28 Jan 2025 15:55:19 +0000 https://dvsnightingstg.wpenginepowered.com/?p=22843 I recently learned that my 2021 article about why I no longer use box plots is now the second-most-read article in Nightingale’s history🤯 (or, at..

The post I Stopped Using Box Plots: The Aftermath appeared first on Nightingale.

I recently learned that my 2021 article about why I no longer use box plots is now the second-most-read article in Nightingale’s history🤯 (or, at least, since Nightingale moved to its current hosting platform). What do you do when you have a hit on your hands? Milk it, baby, by writing a sequel 😎

When that article came out, I got a lot of comments and replies. Like, a lot a lot. Like, I spent three days responding to them. There were all sorts of comments, of course, but there were definitely common themes. This article summarizes the most common replies that I received, along with how I responded to each, making it very much a sequel to the original article, just with several hundred new coauthors. Well, uncredited coauthors🤷

The majority of the replies that I received expressed some form of agreement, with chart creators thanking me for helping them understand why their box plots flopped with audiences or for making them aware of alternatives like strip plots and distribution heatmaps. You’re welcome!

There were, however, also plenty of thoughtful objections and counterarguments, and I’ll be focusing on those because reading about people agreeing with one another is pleasant and boring.

Alrighty, then. First up is…

“This [example box plot] is useful! I can clearly see [insight, insight, insight, etc.]!”

I wasn’t suggesting that box plots aren’t useful. Obviously, they can show useful insights. I was suggesting that simpler chart types like strip plots and distribution heatmaps can show all the same insights that box plots can, but are easier to understand, less prone to misinterpretation, and don’t hide potentially important information. I wasn’t claiming that box plots are useless, just that, when compared with other distribution chart types, box plots have some significant disadvantages and no identifiable advantages, so it might make sense to use other chart types instead.

To dispute the claim that I was making, then, you’d need to show the same dataset as a box plot, strip plot and distribution heatmap, and then identify specific insights that are clearer in the box plot than in those simpler chart types. Many people did send me box plots, but most didn’t include strip plots or distribution heatmaps of the same data. This made it difficult or impossible to see if the insights that they pointed out in their box plot would have been just as clear in those simpler chart types. None of these responses, then, actually addressed the claim that I was making.

Some people did step up, however, such as Sergio Garcia Mora, who showed the same dataset in a variety of chart types in this fantastic article:

A box plot compares salary distributions for HR roles in Argentina by gender. Male employees generally have higher medians and larger ranges than female employees across roles such as Analyst, HRBP, and Manager. Purple and teal differentiate genders.
A scatter plot displays salary distributions for HR professionals in Argentina by gender and role, with distinct median lines for male and female employees for roles like Analyst, HRBP, Supervisor, Head, and Manager. Purple represents female employees, and teal represents male employees.

This is what Sergio wrote about the box plot version:

“What I like about this visualization is that we can see the distribution of the salaries by the size of the halves of the boxes. Let’s take for instance the Head position. The medians are similar, but in the case of women, the bottom half of the box is larger, so that means that the range of salaries for women is broader. That tells us that there are women in Head position with salaries far below the median.

The opposite happens with male professionals in the Head position. The top half of the box is larger meaning that there are men in the Head position with salaries far above the median.”

To my eye, anyway, all of these insights are at least as clear in the jittered strip plot version. Plus, I could see several insights in the strip plot that weren’t visible in the box plot, such as the fact that there are fewer employees in the more senior roles, that no Managers make between about AR$85K and AR$110K, etc.

There might be box plots out there that show insights that aren’t as clear in simpler chart types, but I have yet to come across a single one. If you have one, send it to me! (Just make sure to include a well-designed strip plot and distribution heatmap showing the same data, s’il vous plait.)

“Box plots are useful because they show quartiles.”

Quartiles aren’t insights, they’re just features of charts that allow readers to spot actual insights like, “The salaries in Company A are more dispersed than the salaries in Company B, which suggests that there’s more room to move up in Company A.” That’s an insight, and you almost never need quartiles to spot those.

Saying that “box plots are a useful way to show quartiles” is like saying that “distribution heatmaps are a useful way to show the bins/intervals that the values fall into.” These aren’t insights, they’re chart features that allow readers to spot insights. What ultimately matters is how clearly each chart type shows insights, not the specific mechanisms that are used to make those insights clear.

Having said that, there are rare cases when quartiles have some special meaning. For example, maybe a company has decided to lay off the middle 50% of its employees based on salaries (which would be weird but, like I said, these are rare cases). Even in a scenario like that, though, interquartile ranges (i.e., the middle 50% of values) could be shown in strip plots and distribution heatmaps, which would still be easier to read and clearer than box plots:

Side-by-side visualizations highlight age distribution by group, with scatter plots overlaid on box plots on the left and a heatmap representing interquartile ranges in yellow on the right.

Like I said, though, it would be very rare to have to do this in practice because, in the vast majority of charts, quartiles (or quintiles, terciles, etc.) have no special meaning and aren’t needed in order to spot useful insights.
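If you ever do need to mark the interquartile range on a strip plot or distribution heatmap, the bounds are a two-line computation. A minimal Python sketch, with made-up ages for illustration:

```python
from statistics import quantiles

def iqr_bounds(values):
    """Return (q1, q3), the bounds of the middle 50% of the data."""
    q1, _median, q3 = quantiles(values, n=4, method="inclusive")
    return q1, q3

ages = [22, 25, 27, 30, 31, 33, 35, 40, 44, 51, 58]
q1, q3 = iqr_bounds(ages)
# Dots with q1 <= age <= q3 are the ones you'd shade as the "middle 50%"
middle = [v for v in ages if q1 <= v <= q3]
```

Note that `method="inclusive"` treats the data as the whole population; the default `"exclusive"` method gives slightly different cut points on small samples.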

“Box plots make outliers easy to spot.”

That’s true, but outliers are just as easy to spot in simpler chart types. For example, in the “salaries by role” jittered strip plot that I showed earlier, the outliers are pretty obvious—they’re the dots that are far away from the main cluster of dots. You could make outliers in a strip plot even more obvious by highlighting those dots but this seems unnecessary; their location away from the other dots already identifies them as outliers.

Outliers can also be added to distribution heatmaps, similar to how they’re added to box plots:

A heatmap of age distribution by group categorizes individuals into age ranges, highlighting the percentage of total members per group in varying shades of blue. Outliers appear as distinct circles.
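For readers who want outlier flagging without the box, the usual box plot whisker rule (points beyond 1.5 × IQR from the quartiles) is easy to apply directly to a strip plot's dots. A Python sketch with invented salary figures:

```python
from statistics import quantiles

def tukey_outliers(values, k=1.5):
    """Flag values beyond k * IQR from the quartiles (the classic whisker rule)."""
    q1, _median, q3 = quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lower or v > upper]

salaries_k = [48, 50, 52, 53, 55, 56, 58, 60, 61, 120]
outliers = tukey_outliers(salaries_k)  # only the 120 falls outside the fences
```

You could then simply color those dots differently, although their distance from the main cluster usually speaks for itself.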

“Box plots work well when there are many distributions to show because they look less visually busy.”

Some people sent me box plots with many sets of values, like the one below, arguing that other chart types would be even busier looking:

A box plot compares employee salaries across 12 companies, showing varying medians and ranges, with some distributions skewed and a few outliers evident.

It’s true that strip plots can look quite busy when there are many sets of values in a chart, but distribution heatmaps are well-suited to these situations:

Personally, I find that the graphics in a distribution heatmap actually are less visually busy than boxes and whiskers, but this is probably subjective.

“Why not combine box plots and strip plots to get the best of both worlds?”

Some people suggested combining strip plots and box plots, like this:

A vertical box plot shows age distributions for three groups labeled A, B, and C. Each plot includes individual data points, medians, and ranges, with Group A showing the widest spread.

Yes, you could do this, but the question then becomes: which specific insights are the boxes making clear that wouldn’t have been clear in the strip plot on its own—perhaps with the medians added, since they’re often relevant? I can’t see any such insights, so the boxes just add complexity without adding any value, IMHO. Basically, I don’t think this is a “best of both worlds” solution because there’s no “second world” in this case, i.e. insights that box plots would show that wouldn’t already be clear in strip plots.

“Sure, box plots don’t work well with multimodal distributions, but they shouldn’t be used to show data like that in the first place.”

A number of people objected to this graphic from the 2021 article:

Two side-by-side plots contrast age distributions for a test and control group. The box plots suggest similarity in medians and spreads, while the scatter plot reveals distinct clustering patterns within each group.

They objected that this wasn’t a valid use case for a box plot because box plots should only be used with unimodal (“bell-shaped”) distributions, not multimodal (“clumpy”) distributions, such as the “Control group” in the jittered strip plot above.

The problem with this objection is that it assumes that readers can always be certain that no chart creators will ever use box plots to show multimodal distributions. If you see a box plot in the wild, though, how can you be certain that the person who created it didn’t decide to use a box plot even though the data contained multimodal distributions? And what about box plots that are dynamically generated based on live data, and in which the distributions might be unimodal on some days and multimodal on others?

Basically, with box plots, readers are always left wondering if the distributions in the chart are unimodal or not—assuming that they’re even aware of this problem in the first place. Chart types like strip plots and distribution heatmaps, however, show unimodal and multimodal distributions clearly and so avoid this problem altogether.

“Box plots are a better choice for more data-savvy audiences.”

Even for audiences that are extremely statistically literate and very used to reading box plots, I’m not sure what benefit box plots would offer that wouldn’t also be offered by simpler chart types (sounding like a broken record now, I know). I am, however, pretty sure that box plots would hide potentially important information from them (gaps, clusters, etc.).

“We shouldn’t be afraid to use chart types that audiences aren’t familiar with. / We should try to teach audiences to read more advanced chart types.”

Totally agree. Indeed, in my Practical Charts course, I cover chart types that many audiences aren’t familiar with, such as step charts and scatterplots (see this article for a more complete list of “basic” chart types that many audiences aren’t familiar with). I cover these potentially unfamiliar chart types in my course because there are certain types of data and certain types of insights that can’t be communicated using simpler, more familiar chart types and so, sometimes, more complex or unfamiliar chart types are unavoidable, and you might need to teach the audience how to read them.

If you’re going to ask an audience to spend their valuable time and brain cells on learning a new chart type, though, there’d better be an “epiphany payoff,” as data storytelling expert Brent Dykes would call it, to justify that effort. I’ve just never seen any epiphany payoffs from box plots that couldn’t also be obtained with more familiar, less effortful chart types.

“There are no bad chart types. All chart types have situations in which they’re the best choice.”

I hear this all the time but I’m not sure why it would be true. It’s easy to forget that chart types are just human inventions, like printing presses and electric toothbrushes; they aren’t fundamental properties of the Universe, like mathematical principles. In fact, box plots are a relatively recent invention, having only been first proposed in the 1950s.

As with any other type of invention, there’s no rule that says that every type of chart needs to have situations in which it’s the best choice. Indeed, the pantheon of human inventions that were the best solution in exactly zero situations is well populated. I wrote more about this idea here.

Box plot defenders also virtually never mentioned one of the major problems that I described in the 2021 article, which is that box plots don’t make “visual sense.”

For example, have a look at the box plot below:

A horizontal box plot shows data spread from 10 to 90, with the interquartile range spanning from 25 to 75 and a median at 50. Whiskers extend to the minimum and maximum values without outliers.

Even to people who are fairly experienced with box plots, it looks like there’s a large cluster of values in the central part of this range.

If you deeply understand box plots and think about it long and hard enough, however, you’ll realize that this box plot shape actually must mean that there are few values in the central part of this distribution, and this data set would have to look something like the jittered strip plot below (which is showing the same data as the box plot above):

A horizontal scatter plot shows two clusters of data between 10–30 and 70–90, illustrating distinct distributions.
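You can check this numerically. In the sketch below (values invented to mimic the two clusters above), every summary statistic a box plot draws lands in or around the empty 30–70 gap:

```python
from statistics import quantiles

# Two clusters, roughly 10-30 and 70-90, with nothing in between
control = [10, 14, 18, 22, 26, 30, 70, 74, 78, 82, 86, 90]

q1, med, q3 = quantiles(control, n=4, method="inclusive")
# med == 50: the median sits exactly where there are no data points,
# and the q1-q3 box spans the gap, which is why the box plot "looks full"
# in the middle
gap_values = [v for v in control if 30 < v < 70]  # empty list
```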

That’s really, really not what the box plot seemed to be showing, though, and there are many other situations in which even experienced box plot readers must “think around” these perceptual paradoxes in order to avoid misreading the chart. Yes, this gets a bit easier with practice, but why use a chart type that forces readers to perform these kinds of cognitive gymnastics when there are readily available alternatives that don’t?

So, did any of these exchanges change my opinion about box plots?

As you can probably guess, I still don’t think that box plots are ever a better choice than alternative chart types. That opinion is now much more thought-through, however, because people took the time to challenge it with such thought-provoking arguments, and I’m extremely grateful to everyone who chimed in. I remain open to being proven wrong and welcome additional comments and examples; just be sure to include a strip plot and distribution heatmap of the same data. To reply, comment on the post of this article on LinkedIn or Bluesky, or reach out to me via this contact form.

If you still feel that box plots have their place and you’ll continue to use them, that’s totally kosher. I certainly won’t call out anyone for using them, and all of this is just my opinion, of course. I would, however, still urge you to consider alternative chart types for one more reason that I haven’t mentioned yet…

Unfortunately, I’ve seen plenty of people feel needlessly stupid because they found it so difficult to read box plots, or failed to grasp them entirely. Unless you’re certain that all of your readers already understand box plots, avoiding making people feel dumb for no reason might be the best argument of all to consider alternative chart types instead.

No Pain (Points), No (Design) Gain: Strengthen Feedback by Making It About Their Needs https://nightingaledvs.com/no-pain-points-no-design-gain/ Mon, 30 Dec 2024 16:51:19 +0000 https://dvsnightingstg.wpenginepowered.com/?p=22742 Feedback elevates work, but only when it is done right. If the feedback is personal—or solely about potential solutions—it can stand in the way of..

The post No Pain (Points), No (Design) Gain: Strengthen Feedback by Making It About Their Needs appeared first on Nightingale.

]]>
Feedback elevates work, but only when it is done right. If the feedback is personal—or solely about potential solutions—it can stand in the way of innovation.

For creative endeavors that produce abstract products, like data visualizations, figuring out a client’s or teammate’s feedback is often the difference between success and failure. Let us start exploring this concept of feedback with an all too common, yet completely imaginary, suggestion: “This part needs to be red.” That imaginary feedback might sound familiar to designers. Most creators have experienced a client or teammate so set on a particular design component that they see it as the only solution. This sort of feedback stifles the creation of innovative options, because it does not center on a pain point.

Pain points are meant to be the superpower of any person providing feedback! The feedback provider knows, better than anyone else, why things are not working. They certainly know it better than the designer would, especially if the designer is an outside consultant. In many cases the pain point is why the client called you in the first place: their data was not working well enough for them. The client should provide the most feedback possible in as much detail as possible. They also get the final say on efficacy: whether they believe the solution addresses the pain point.

The problem for clients, and I completely understand this, is that it is difficult to disclose your needs and problems. The disclosure experience is one of vulnerability, requiring trust in a person you do not know well. Ask anyone in therapy how good and easy this disclosure process feels. Furthermore, it is possible that the client reached out after they felt the effects of the pain point but before they understood that pain point. In this ‘early call’ situation, the client would not even fully know what is wrong. I mean, would you wait to see a doctor until you understood what was happening to you, if something demonstrably bad occurred? Of course not. Given these circumstances, I, too, might find it easier to ask for a red component and lean on people when they do not create one.

This is why, as designers, it is important we help our clients and teams uncover the underlying pain point often hidden in the feedback. While feedback meetings feel scary, they do provide an opportunity to uncover pain points and reframe requests in those terms. It is critical that we apply all the respect we can muster to these meetings. The clients hired us because we are gifted at this work; by hiring us, they are stating a need for help. Further, design mastery by itself does not support a successful business—respect for others does. If we rely only on our designs, eventually work, internal and/or external, may dry up. No one wants to work with a judgy know-it-all.

What might underlying pain points look like in our imaginary feedback? Maybe this release is during the holiday season and the pain point is that people feel the website is not organically responding to popular culture. Or the pain point is overuse and they want to discourage folks from using the feature in question. It might be that the pain point is their materials don’t feel associated or consistent and red is a brand color.

Each of those pain point examples felt plausible as to why someone might “see red” as the only answer—cultural ties, overuse, and disconnection. Yet, each one of those pain points generates more options to choose from than “red.” If we can help the client see their feedback in those terms, the terms of pain points, we can generate for them so many more solutions. I am absolutely sure a UX expert can generate 25+ solutions in fifteen minutes on how to curb overuse, for example.

I find that this concept plays out well with an analogy about a doctor and patient’s interaction, an analogy expertly explored in Jordan Morrow’s book Be Data Analytical. The physician knows little of why the patient is in their office on any given day and will not know information unless it is provided by the patient. Hurts when you laugh? Mention it. Pass out when you sneeze? Mention it. Not slept well in weeks? Mention it. Each one of these items helps the doctor do their job—diagnose and treat. In fact, it would be silly to imagine a doctor telling you how to feel and what your pain points were. Conversely, the patient is not a medical expert. As such, the patient would be better served by deferring to the medical provider in generating possible solutions. Imagine a patient saying, “I need surgery,” and accepting nothing else. They may be correct, but how many more options could they generate together?

Core Dataviz Style Guide Components https://nightingaledvs.com/core-dataviz-style-guide-components/ Thu, 12 Dec 2024 01:46:21 +0000 https://dvsnightingstg.wpenginepowered.com/?p=22578 Throughout my career, I’ve oscillated between working solo on data visualization projects and being a part of larger teams tasked with creating compelling visual stories..

The post Core Dataviz Style Guide Components appeared first on Nightingale.

Throughout my career, I’ve oscillated between working solo on data visualization projects and being a part of larger teams tasked with creating compelling visual stories from complex data. One consistent challenge, whether working alone or in a team, has been maintaining a blend of consistency and efficiency in our visualization practices. In the early days, I found myself reinventing the wheel with each project, trying to replicate the success of past visualizations without a clear roadmap. This not only diluted our brand identity but also turned into a significant time sink.

The goal of this article is to navigate you through the process of establishing a foundational data visualization style guide. When you see a multi-page style guide, you might get overwhelmed and think you need to make everything all at once. Data visualization style guides, however, are constantly evolving and highly modular. The best advice is to start small, develop guidelines that make the biggest impact for the least amount of energy, and expand from there. By focusing on key elements such as accessibility, colors, typography, and chart layout, I aim to provide a blueprint that ensures consistency across your visualizations while enhancing their impact and accessibility.

Four steps to building a data visualization style guide

Step 1: Determine if You need a data visualization style guide

Although I am a big proponent of data visualization style guides, they are not necessary for everyone or every organization. If consistency is not critical (for example, when each data visualization is highly unique or customized, or if you work in highly specialized scientific research or a small team working closely together), or if you rarely produce data visualizations, you may not need a data visualization style guide. However, if your visualizations suffer from inconsistency, are inaccessible to part of your audience, or require an outsized amount of effort to produce, you might want to consider implementing a style guide. The benefits of a style guide extend beyond aesthetic uniformity: it enhances your workflow, ensures your visuals are accessible to a broader audience, and improves engagement by presenting data in a clear, consistent manner.

Step 2: Define the core components of your data visualization style guide

This step contains a series of exercises that will guide you into building the foundation of your data visualization style guide. To prepare, I suggest you use your favorite mind map tool, or even just a Google Doc, and paste every unique data visualization into it. If there are hundreds or thousands, choose the most recent and use your best judgment.

2.1: Accessibility

Accessibility in data visualization means ensuring that your visuals are comprehensible and usable by as many people as possible, including those with disabilities. While striving for 100% accessibility might seem daunting, there are practical steps you can take. Consider conducting an accessibility audit focusing on color contrast, text legibility, and whether your color palettes are friendly to those with color vision deficiencies.

Accessibility might not be a dedicated section, as it should be integrated into the styles chosen throughout your style guide, but it is worth noting any considerations that might need to be made ad hoc and documenting the methodology that went into making the guide.

2.2: Colors

Colors are pivotal in shaping the perception of information. A well-chosen palette not only enhances the aesthetic appeal of your visualizations but also boosts their clarity and comprehensibility. After a thorough color audit, consider leveraging tools such as Adobe Color or Leonardo to refine your palette. These tools help ensure your colors adhere to accessibility standards while remaining true to your brand’s visual identity.

However, perfecting your palette can become a significant investment of time—a fact I’ve learned through hands-on experience. A practical approach is to establish a ‘good enough’ base palette. Then, put it to the test: apply it to existing charts and solicit feedback from colleagues. There are various strategies to ponder when selecting colors for data visualization. For an in-depth understanding, refer to scholarly articles or papers that delve into data viz color usage.

Exercise: Color audit

  1. Catalog colors: List all the colors used across your visualizations. Tools like Adobe Color or desktop color picker apps can help identify specific color codes.
  2. Assess usage: Evaluate how each color is used. Which are for backgrounds? Which represent data? Do you need an extended categorical color palette with 12 colors, or would 5 suffice? Note any patterns or inconsistencies.
  3. Check for contrast and accessibility: Use tools like WebAIM’s Contrast Checker to ensure colors meet accessibility standards, particularly regarding contrast ratios.
  4. Identify overlaps and gaps: Look for colors that serve similar purposes and can be consolidated, and identify if there are gaps in your palette that need new colors.
  5. Refine palette: Based on your analysis, refine your color palette to ensure it’s both visually appealing and functional for data representation. Consider starting with brand colors and tweaking or selecting a few as base colors in order to ensure brand identity. For more guidance regarding colors, refer to resources at the end of the article.
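If you’d rather script the contrast check than paste pairs into a web tool, the WCAG 2.x formula is short. A Python sketch (the hex values are placeholders for your own palette):

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB color like '#0a66c2'."""
    h = hex_color.lstrip("#")
    channels = [int(h[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1;
# WCAG AA requires at least 4.5:1 for normal-size text
assert round(contrast_ratio("#000000", "#ffffff"), 1) == 21.0
```

Running every background/data-color pair from your audit through `contrast_ratio` turns the accessibility check into a repeatable script rather than a one-off manual pass.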

2.3: Typography

Typography in data visualization influences readability and the viewer’s ability to quickly grasp the presented information. An audit of your current use of fonts, weights, and sizes across your charts can reveal much about your existing practices. Simplifying and standardizing these elements can significantly enhance your visualizations’ clarity and impact.

Exercise: Typography audit

  1. List typefaces: Record all the typefaces used, including weights and styles (e.g., Regular, Bold, Italic).
  2. Analyze use cases: Determine how typography is used across different elements (titles, axis labels, annotations) and identify any inconsistencies.
  3. Standardize typography: Choose a set of fonts that work well together and define specific use cases for each, aiming for a balance between variety and cohesion.
  4. Create a reference sheet: Compile your findings and decisions into a reference sheet as part of your style guide.

How many type combinations are used in your charts? What sizes, font families, weights, styles and in what contexts? These will need to be simplified. Next, assign font family, style and weight for each function: 

  • Header
  • Subtitle
  • Axes
  • Direct labels
  • Source and notes (if applicable)
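One lightweight way to make that reference sheet enforceable is to encode it as data rather than prose, so the same hierarchy can be applied programmatically or handed to a templating tool. A sketch in Python, with a hypothetical font family, sizes, and weights standing in for your own choices:

```python
# A typography reference sheet encoded as data. The family, sizes, and
# weights here are hypothetical placeholders -- substitute the decisions
# from your own audit.

TYPOGRAPHY = {
    "header":        {"family": "Source Sans Pro", "size": 18, "weight": "bold"},
    "subtitle":      {"family": "Source Sans Pro", "size": 13, "weight": "regular"},
    "axes":          {"family": "Source Sans Pro", "size": 10, "weight": "regular"},
    "direct_labels": {"family": "Source Sans Pro", "size": 10, "weight": "semibold"},
    "source_notes":  {"family": "Source Sans Pro", "size": 8,  "weight": "regular"},
}

def text_style(role: str) -> dict:
    """Look up the agreed style for a text role; fail loudly on unknown roles."""
    if role not in TYPOGRAPHY:
        raise KeyError(f"No typography rule for {role!r}; add it to the guide first.")
    return TYPOGRAPHY[role]

print(text_style("header"))
```

Failing loudly on an unlisted role is a deliberate choice: it forces new text functions to be added to the guide instead of improvised per chart.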

2.4: Chart layout

The types of charts you choose to represent your data can dramatically affect its comprehensibility. A focused audit of your most commonly used visualizations will help you understand which chart types best convey your data’s story and the charts your organization commonly uses. Standardizing these choices can improve your audience’s ability to quickly understand your data.

Exercise: Putting it all together

Now that you have your colors and typographical hierarchy, it’s time to put them together. Start by choosing a frequently-used chart.

Step 3: Implement Your data visualization style guide

With the core components defined, the next step is implementation. Expanding the style guide to include detailed examples and templates can facilitate its adoption. Providing clear guidelines and training can help ensure that everyone in your team or organization can follow the style guide effectively.

Exercise: Create a one-page guide out of the elements that you laid out in the previous steps using your favorite tool (if you don’t have one, I suggest Figma). Start by using the guide yourself, and once it passes your test, give it to a colleague to follow. Make sure to ask for feedback and view the visualization that was produced. Does it follow the guidelines as you intended? Make adjustments from there.

Step 4: Maintain and evolve your style guide

A style guide is not a static document but a living one that should evolve with your organization’s needs and the data visualization field’s best practices. Regularly review and update the guide, incorporating feedback from users and new insights from ongoing projects.

Conclusion

Taking on the task of establishing a data visualization style guide may seem ambitious at the outset. However, by starting with the core components of accessibility, colors, typography, and chart layout, you can lay a solid foundation allowing for expansion. This guide is your first step toward achieving consistency, efficiency, and accessibility in your data visualizations, ensuring your visual stories resonate more profoundly with your audience.

Additional resources

For those interested in delving deeper into the topics of data visualization standards, color theory, accessibility, and chart selection, a wealth of resources is available. Websites such as the Data Visualization Society and their #style-guides Slack channel, books like Storytelling with Data by Cole Nussbaumer Knaflic, and tools like Tableau Public offer great starting points for exploration and inspiration.

Tools to help design color palettes: 

Typography:

General resources:

The post Core Dataviz Style Guide Components appeared first on Nightingale.

How to Share Memorable Data https://nightingaledvs.com/how-to-share-memorable-data/ Mon, 18 Nov 2024 23:17:31 +0000

The post How to Share Memorable Data appeared first on Nightingale.

Integrating data visualization into non-profit work is both a strategic advantage and an easy pitfall.

The core of non-profit work, for me, is communication. Listening to people in need, sharing their stories with the public, and coordinating change all come back to how well an organization informs itself and others. The impact communication can have feels timeless, but how people connect is not. The growth of data, as well as social media, has changed public relations. Earlier decades may have demanded less data in non-profit communications, but in an age dominated by TED talks and tech pitch decks, sharing information memorably seems to be the norm. Which raises a question: how can a non-profit organization use information memorably?

Data visualization is an excellent tool to answer this because it is the act of communicating information effectively through images. The proliferation of data visualization into everyday life indicates tables, charts, and icons have been widely adopted. Yet being aware of data visualization as a practice, and even attempting it, will not fully help a non-profit organization use information memorably. The non-profit organizations that use information memorably have likely integrated data visualization into all their methods of communication.

In this article, we explore how The DeBruce Foundation integrated data visualization into research and reporting, social media, public tools, and internal reporting. You can even see this in our free upcoming research event set for November 19, 2024. We hope you will feel empowered to attempt an integration similar to the points outlined below. The first step in your journey may seem indirect. Before applying these practices, consider growing your data literacy, listening to your team’s needs, and solving pain points. As you remove obstacles to data visualization and build comfort with it, you will increase your ability to integrate these practices within your organization.

Two key findings on employment empowerment among working-age Americans, along with supporting data.
Examples of data analysis from our 2023 report, Start Early, Succeed Sooner: Insights from the 2023 Employment Empowerment Study. Our main goal in these designs was to demonstrate subgroups and their advantages, which is why we kept the graphics and colors limited and interconnected.

Research and reporting

Focus on narrative to avoid vague or confusing designs.

The DeBruce Foundation is dedicated to expanding pathways to economic growth and opportunity. Our dedication takes the form of helping individuals unlock their career potential through self-exploration with several free resources and training. Supporting our work are a series of reports and research on how individuals build careers. We even have a nationally representative longitudinal trend survey on the subjects of employment, income, and work conditions going back to 2020.

Not every non-profit organization is going to have research looking over thousands of interactions on a subject. However, almost all non-profit organizations will generate some form of thought leadership on their impact and the status of challenges they look to address. This writing may take the form of an annual report or a mission report. In any case, these reports are the best and expected place to integrate data visualization because most public reporting includes some amount of numerical data.

Our designs in this space tend to come in two styles. The above example represents narrative driven design. To be narrative driven means the information displayed has a clear insight that we have placed in context and communicate as a story. In our 2023 report, we felt it was important to stress the benefit of being ‘Employment Empowered’, a status the research attributes to high levels of ‘Career Literacy and Network Strength.’ With this insight in mind, we addressed the context through the beneficial outcomes of ‘Employment Empowerment.’ For example, people are more likely to be employed and have increases in annual earnings if they are ‘Employment Empowered.’ We then found ways to visually communicate that story. In this example we used large numbers in the accent color paired with smaller text.

What about that second style? Exploratory design. Not every written item generates an immediate insight that can be paired with contextual understanding to make a narrative. In developing writing about the status of your organization’s focus, you may need to generate a more open design. Be wary of using the lack of a narrative as an excuse for vague design. Designs that do not know what they are can often confuse. For us, exploratory design means having a much simpler message that allows space for the user to explore their own narratives. An excellent example of this would be the image in the very first part of this post—our narrative being, “that’s a lot.” All we are looking to do is show that we have a large amount of activity across the United States. We create space for the user by having an interactive data visualization, allowing them to click on various states and cities to get more information.

A social media post from The DeBruce Foundation promoting their "Opportunity Explorer" tool.
An example of integrating data visualization into our social media accounts. To show off the interactivity of the online tool this post includes an animated GIF from actual tool usage. (Source: Twitter)

Social media

Focus on evergreen media content for a longer development timeline.

It is easier, in some ways, to spend months meeting the broad visual needs of research. It is a different challenge to integrate data visualization into social media. Social media dynamically changes with the interests of those users engaged with it. Given the effort needed to generate a data visualization, this turnaround time might be too difficult; however, social media provides a diversity of subject matter.

Currently, non-profit organizations seem to be expected to have a semi-consistent presence on social media. What defines semi-consistent is different for each organization and can vary greatly. For our consideration in this article, let us loosely define a semi-consistent presence on social media platforms as one to two online posts a week. This non-blog, social media posting rate would hypothetically yield between 52 and 104 posts a year. Not every one of those social media posts is about a unique activity or daily event, or is responding to the public. Some percentage of a non-profit organization’s social media posts are dedicated to reminding your audience of the issues important to you. This more consistent content is where integration with data visualization can come into play.

We have found success integrating data visualization into more evergreen social media subject matter. By evergreen I mean topics we think will maintain their value outside of the moment they were crafted. The fact that these posts continue to be relevant and will continue to be used means you have time to collaborate with your communications team on using data visualization. Take the example above. This social media post is part of a series introducing and reminding our audience about the Career Explorer Tools, a series that appeared on many social media platforms even if we only show a few here. Here is one on research we funded and a few on a project we support, ProX.

The best advice I can give is to strive for design clarity when presented with a much smaller canvas. Make sure each image’s focus is a single sentence. Our ProX work was a great example of this: the underlying research involved matched pairs of students and analysis from our data science team. It would be easy to generate pages of analysis, but what did we try to do? Boil it down to key takeaways and then design those for social media. Here is an example: “The ProX experience elevated 63% of interns’ Career Literacy.” Straightforward, and the visual of rising bubbles underscored that point.

A screenshot of the "Occupation Explorer" tool by The DeBruce Foundation. At the top, the title "Occupation Explorer" is displayed, with a description: "Your tool to explore occupations by looking at their #1 ranked Agility and estimated annual wage. Size and color of the words are based on the average, annual wage of the occupation."
An example of integrating data visualization into our online resources. The design cycle for this tool drew from 300+ user comments across a year. The emphasis is on displaying words and using the icons to change the words shown.

Public tools

Focus on user needs when developing tools for them.

The usage of data visualization in reports and social media can be less interactive. True, an exploratory design allows you to add space for the individual user to generate their own narrative while still making a broader point. Public facing tools, however, are different. These are data visualizations aimed at guiding a user through a process or experience to uncover insight.

Creating tools for the public requires a lot of background investment in your team and environment. The design has to meet the user’s data literacy level. Awareness of the user’s needs has to drive the tool’s design. Content approval needs to be clear and supported. Any one of these components can cause development hiccups. However, many of those costs are one-time and will yield benefits across many public tools.

This is where, I believe, we really shine. We dedicate weekly time in our team meetings to discuss where our data comes from, where people can find it, and what they can do with it. We have specialized training on individual subjects spanning from best practices in making a PowerPoint slide to how to debrief after an event. We have even invested in making a streamlined content approval process that allows individual experts to focus their input while still leaving room for feedback. Public tools are always built on a strong team foundation. Let us assume that you are able to build all of these items. What does design look like then?

Our cycle for design of public tools can be found in this piece where I compare data visualization design processes almost a decade apart. The key component is that you need to focus the pain point to a single problem and then work diligently to manage the project. In the case of the above example for this section, we had heard challenges from our users. The feedback was that the 800+ occupations listed by the Bureau of Labor Statistics felt too large.

Coming back to our original point of self-exploration, it can be hard to motivate users to take a journey into themselves if they feel there are a million outcomes. Everything we did from that point on became about how to chunk this information while still displaying large, unique groups. The icons across the top allow users to search by Agility; there is also a tool to search by annual salary; all the while the page generates a word cloud aimed at showing the most possible occupations without overloading the user.

An internal report that combines a table, a jitter plot, and a series of large numbers.
An example of integrating data visualization into our daily reporting. Here is a favorite tool by both the designer and staff. The purpose of this item is to allow users to study the significance of one day’s activity. It does this by including numerical bins for various completion amounts and the percent of the population associated with them.

Internal reporting

Focus on teammates’ pain points when making reports for insight generation.

Insight and strong storytelling are key to design. But those concepts do not appear on their own. Team members performing analysis find insights while the team generates stories together. Internal reporting is unique in that it supports this process, setting the stage for all the other integrations.

If you want to try an integration, consider this one. The reasoning is that problem solving builds bridges and these reports solve problems for your team. Further, in the non-profit space, problems can be seen daily. It is easy to connect with your team if you are helping them in their day-to-day challenges. The above image exemplifies this principle.

Our organization has many digital resources helping people explore internally with the goal of expanding their economic pathways. One of the most popular of those resources is our Agile Work Profiler© (AWP), a career assessment that ranks your Agilities© (skills universal to all occupations). As of writing, we have over 255,000 lifetime AWP completions from every U.S. state and territory as well as across the globe. The AWP helps people better understand themselves in the workplace and therefore better understand their opportunities. The daily counts of this assessment’s usage are an important measure for us and provide an opportunity to integrate data visualizations.

The pain point was around creating insight. We would get a total for a day but would ask ourselves about that total’s significance. How do we know if it was a “good day?” This was particularly true for our leadership as they reviewed the efficacy of various events. The answer was to create size-based categories for our daily totals that could be used to measure the success of an individual day. In a way, we were creating a text-only histogram that grouped each day’s total completions. The result allowed us to know that any day with 100 or more responses was in our top 15% of all days. Further, we coupled those categories with a listing of our most popular days and rounded it out with an image that showed the number of days as bubbles.
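The grouping logic behind such a report can be sketched in a few lines of Python; the daily totals here are randomly generated stand-ins, not the Foundation’s actual numbers:

```python
# A minimal sketch of the "text-only histogram" idea: take each day's
# completion total and report what share of all days falls at or above it.
# The daily totals are made-up sample data for illustration only.
import random

random.seed(7)
daily_totals = [random.randint(0, 250) for _ in range(365)]  # one fake year of days

def standing(total: int, history: list[int]) -> str:
    """Describe a day's total relative to the full history of days."""
    at_or_above = sum(1 for t in history if t >= total)
    share = at_or_above / len(history)
    return f"{total} completions puts the day in the top {share:.0%} of all days"

print(standing(100, daily_totals))
print(standing(240, daily_totals))
```

In practice you would replace the random history with your real daily counts and pick fixed category cutoffs (our “100+ responses” bin, for example) from the percentiles this produces.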

The design emphasis was on ease and creating sharable numbers. Thirty seconds with this report and we know 800+ AWP completions in a day is a banner day and anything in the 600–800 range is in the top 1% of all days historically. Not shown in this image is the amount of cross-referenced information found in the tooltip messages. Notice how this report is fuller and uses less white space than the norm. This is okay, as the report is not normally one we look to share externally. When designing for internal consumption, err on the side of sharing more information rather than less. Obviously, there are exceptions to this rule, like board materials, but for designers, I think it is important to provide multiple pathways to insight.

Closing thoughts

Integration is a key to success as it speaks to the cross-departmental use of data visualization. Unfortunately, there are no shortcuts to be found in this process. The development of solutions requires relationships, clear processes, and a fair amount of education. It is worth it, though, when you see your map in front of a governor or a tool gain over 80k hits. These mountaintop experiences should refresh you as a designer so you can step back down into the valley and continue to build solutions that help others.


How Can I Help People Find My Viz? https://nightingaledvs.com/how-can-i-help-people-find-my-viz/ Wed, 16 Oct 2024 16:01:35 +0000

The post How Can I Help People Find My Viz? appeared first on Nightingale.

How a person receives a product is as important as the product itself, and truck drivers are a great way to understand this point.

In the US, up to 72% of freight by value is delivered via truck. These products travel by some combination of boats, planes, and/or trains, but at some point a majority of them are driven to their final destination. Let’s say truck drivers were uncharacteristically inconsistent and you were a small business owner. Could your business grow in this made-up system? Maybe. Growth would be difficult because no matter how good your product was, it might not get to the right person in time, souring customers. The same is true for our designs. It’s not enough to have excellent data visualizations—you need to eliminate barriers to access by understanding the user’s journey.

In some cases, a user’s journey is a simple search for information, and there is ample research on how people search. The interest in studying information seeking makes sense given the amount of choice in our world. That is, there are so many options or details for literally any product that users have to make more decisions overall in their journey. In fact, there are whole fields of study on the subject of a user’s journey—user experience (UX) and user interface (UI) design. Designing a top-tier journey for your users, and not just their interaction, requires some level of expertise in UX or UI. That said, I believe as designers we can still make better choices while we develop that understanding or connect with those experts.

Our contextual discussion so far has considered truck drivers, toothpaste brands, and computer science research. By any measure that is a lot of disparate topics, even for one of my articles. This is why I think it helps to review a more realistic example; an example that is based on my personal experience. The phone rings and it’s a prospect getting back to a decision maker ahead of schedule. The decision maker remembers this amazing dashboard you shared in a recent meeting and wants to share some insight from it. Unless they have a printed copy on their desk, the decision maker needs to locate your data visualization on their computer before they can study or share it. So, how does the decision maker find your creation? What steps do they take? What is our user’s journey?

In my experience across for-profit, not-for-profit, government, and academic settings, I have seen four major pathways by which people typically access data visualizations. Understanding these simple pathways can help you ease the user journey for your audience, if for no other reason than awareness can make you more efficient at supporting them. The major pathways I have seen are: a gatekeeper, a file structure, a saved list of links, and a keyword search. Most organizations house more than one pathway, and often their development is more organic and need-based than mindful and systems-based.

A gatekeeper is a person or AI with which users interact to get the content. This is the oldest pathway I know of and the only one that has existed, to varying degrees, in every job I have ever had. In our above example, the decision maker would not attempt searching for the content. They would either ask their aide to reach out to you or reach out to you themselves. This pathway is a double-edged sword. If the designer is receiving requests for their content, then it is clear the team desires to use that content. It is also clear that internal presentations in meetings are displaying the value of this content effectively. Otherwise they wouldn’t care to call. This is wonderful! The flip side is that the user may view the designer as transactional rather than consultative, calling only the moment they see a pain point. This implies their data visualization literacy is limited to knowing when the content is valid versus knowing the content. Overreliance on this model can create unrealistic time demands on your schedule. This is why most places also have some sort of self-service clearinghouse of content.

A close-up image of hanging files with file folders of different colors. Some words are understandable like Ad Hoc, Board, and Human Resources, while most are not.
Regardless of your experience or interface, documentation is always good. Keeping paper copies of designs, sketches, and other team documents can help resolve future questions.

A file structure is a series of folders, real or digital, that create a storage system users can learn in order to locate the right content. Abstract or real versions of file structures are found in lots of places, from the library, to grocery store aisles, to your email inbox and the local newspaper. Setting these up is harder than it looks. It leads, in fact, to the question: “how do you group things for users?” There is alphabetical sorting, which is very common due to shared knowledge of the alphabet, but challenging given it requires the person to recall the name of the item. The decision maker who just got the call may not exactly remember your title. This is why, when using this pathway, I tend to group my visualizations into non-overlapping categories, each with its own definition. That way the decision maker need only recall that it was about “board meetings” and they can flip through the content in the folder. Definitions are key for teammates’ understanding, as they become references, but also for your own classification purposes.

A screen-shot of the collections section of Tableau Cloud with six boxes listed as 'Playlist' while the owner name for each is blacked out.
Creating lists with your internal consumers provides you a rare opportunity to review the latest reports with them. This introduces reports in a new way that may impact usage.

A list of saved links works like a favorites list in your web browser or a playlist on Spotify. In fact, here at The DeBruce Foundation, I refer to Tableau’s Collections as Playlists, which have been very helpful in cutting down the amount of content the user needs to process in their journey. There are also plenty of users who keep a list of links in their web browser. It’s hard to think of an analog example of saved links; perhaps a dedicated drawer or physical location? Regardless, the challenge of this pathway is also its strength. What the user needs is one click away. The focus allows for a great deal of speed and comfort; however, saving a link is something a user will only apply to known content of importance. New reports will not automatically populate a user’s Google Chrome favorites list, and designers may struggle to introduce new content to their users. With a file structure, by contrast, when the user opens up the “board meetings” folder they can see any new additions.

This is where Tableau’s Collections really stand out. As an administrator of Tableau Cloud, I can add new reports to a user’s playlist on their behalf. It is important to gain approval first, but providing a “white glove” experience in list management is something whose value shouldn’t be underestimated. If your clearinghouse for reporting has the ability to create user-specific lists, then I would suggest it is one of the best pathways here.

A screen-shot of the Details section of a headshot JPEG file where the focus is on the Description: Title, Subject, Rating, Tags, and Comments.
Learning about this trick was incredibly helpful as I generate a lot of files and do not always perfectly recall my folder structure definitions. This is the power of multiple pathways to your work.

A keyword search involves users searching a website, file structure, or computer by inputting words or phrases associated with the content in order to locate it. In a way, this pathway is a more abstract version of the file structure, with the keywords acting as folders grouping different items. Interestingly, this function exists outside our design programs! Right-click on any Microsoft Office file or JPEG and select the “Details” tab. From there, you should see a “Description” section. I believe at least the “Tags” item is searchable by file-exploring software. This means you can apply this to your own internal documentation, not just in design interfaces like Tableau Cloud. The challenge here lies in which words become keywords. Select the correct words and the search process feels intuitive to the user. Select the wrong words and the staff now has to memorize a series of seemingly random prompts.

This pathway is by far the most powerful and the most complicated. Thanks to search engines’ popularity, almost all of us know how to search via a search bar. This means teaching your teammates how to find designs is much more about pointing them in the right direction than introducing technology. That is part of the answer, but it does not yet address the question of which tags or keywords to use. Again, I am sure there is deep analysis on this subject, but for me it breaks into two steps.

Step one: decide between a table-of-contents index of keywords and a more free-form approach. A table of contents works like grocery store signs—they are more or less the same in each store, and folks have to memorize, to some degree, whether items fit in a category. This makes perfect sense with limited subject matter. That is, classification is easier when there are fewer unique items to classify. If you have a more diverse collection of subject matter, then creating a universal classification system may be very hard to accomplish. This is why I tend to go the route of numerous keywords, tying those words and their variants to different grouping topics. The upside is that there is little memorization required of the user; the downside is that the keywords cannot easily be used as an index.

Step two: decide on the keywords. In either case, index or free-form, I think the best strategy is to build keywords around how people search for data visualizations. Below are the search concepts I have used before; they represent the meta categories associated with a person’s interests. From these, several keywords would need to be generated; example keywords are in parentheses. Again, I must stress using non-overlapping keywords, and some definition, even if loose, needs to be in your documentation.

  • Type of audience (Partner, public)
  • Type of associated department (Sales, partnerships)
  • Type of location (US, international)
  • Type of goal (Goal 1, goal 2c)
  • Type of topic (Board meeting, training)
  • Type of data (Survey, web traffic)
  • Type of data activity (Comparison, Composition)
  • Type of visualization format (Report, slide)
  • Type of chart (Area, Bar)
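As an illustration, the free-form approach above can be prototyped as a simple tag catalog plus a subset search; the visualization names and tag assignments below are hypothetical examples, not real reports:

```python
# A minimal sketch of free-form keyword tagging across the meta categories
# listed above. Names and tags are hypothetical; a real catalog would live
# in your documentation or BI platform.

CATALOG = {
    "Quarterly Board Pack":  {"board meeting", "public", "survey", "report", "bar"},
    "Partner Web Traffic":   {"partner", "web traffic", "comparison", "report", "area"},
    "Training Sign-ups Map": {"training", "us", "composition", "slide"},
}

def find(*keywords: str) -> list[str]:
    """Return visualizations whose tags contain every requested keyword."""
    wanted = {k.lower() for k in keywords}
    return sorted(name for name, tags in CATALOG.items() if wanted <= tags)

print(find("report"))                # matches two items
print(find("board meeting", "bar"))  # combining keywords narrows to one
```

The subset test (`wanted <= tags`) is what makes multi-keyword searches narrow results naturally, and it only works well when the tags themselves are non-overlapping and documented.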

There is no silver bullet for helping teammates find your work. It takes time to understand the pathways already established in your environment, and more time past that to master them. The key in any of these situations is documentation and consistency of action. Even in the gatekeeper pathway you can still set up office hours for when people can call you. That said, as designers we are used to iteration in design, and how teammates access your work is no different.

Cover image is AI-generated.



Interviewing AI Assistants for Data Visualization https://nightingaledvs.com/interviewing-ai-assistants-for-data-visualization/ Tue, 08 Oct 2024 15:52:43 +0000

The post Interviewing AI Assistants for Data Visualization appeared first on Nightingale.

In today’s world, you need to run very fast just to stay in place. Technology is developing at an incredible speed, but I don’t believe it will replace specialists! I believe it will become a loyal assistant, helping to eliminate routine tasks.

Our team is keeping a close eye on all the latest AI innovations that could be useful for data visualization specialists and BI analysts working with data, graphs, and dashboards. These professionals are expected to deliver insights and visual representations of all kinds of data for various business needs. So much to do, so much to handle! A little help from an assistant certainly wouldn’t hurt.

We regularly review interesting AI tools, and in this article, we want to briefly introduce a few of them, while diving deeper into one of the most exciting ones!

So, months of researching the AI data visualization market have brought us the following insights! Our main tester has been Anya, our marketing director and neural network specialist! You can read a few of her articles on this topic on Medium, where she reviews some of these products. Highly informative reading!

Now, let’s take a look at the list of potential assistants! Who will we hire for our team? Tell us a little about yourselves, dear candidates. I’ve heard that some of you are great at working with data, but not all of you understand charts. And some can even build dashboards? How about a trial period and a test task? All agreed?

Logos of modern AI tools that can be useful to data visualization specialists and BI analysts

Perplexity
What it does: helps gather and analyze information
Drawbacks: the service can be too eager to agree with your framing. Better to phrase requests neutrally: “I want to understand whether small businesses need content marketing. Give me an answer with pros and cons.”

Athenic
What it does: analyzes data and builds charts
Drawbacks: works only with one sheet of data and sometimes makes odd calculations

Julius
What it does: powerful data analytics with a user-friendly interface
Note: Ability to build and customize charts directly in the service
Drawbacks: good for quick data analysis, not suitable for complex calculations or merging datasets

ChatGPT
What it does: analyzes complex data and prepares visualizations
Note: Ability to build and customize charts directly in the service through new queries
Drawbacks: it’s always worth double-checking the data behind its trends and key findings, and asking it to clarify the logic of its calculations.

Basedash
What it does: claims to build dashboards 100 times faster than manual assembly
Drawbacks: the interface is decent but not the most convenient. It doesn’t connect to Excel files or basic tables, though it supports many SQL databases.

Rows
What it does: replaces traditional spreadsheets, simplifies data analysis, automates dashboard and report creation, and eases collaboration on projects
Drawbacks: advantages include easy data integration and availability of templates. But honestly, sometimes it’s simpler and more convenient to use good old Excel.

Polymer Search
What it does: simplifies dashboard creation and data visualization by automating both and offering intuitive templates and AI features
Note: currently the most interesting tool on the list. Give it data, and it will build a dashboard on it!

This last candidate intrigued me immensely, so I invited them for a second round of interviews and personally had several conversations with them. We worked together on data visualization tasks and dashboard building! Based on the trial period results, help me decide—should I hire this assistant full-time?

Trial period: testing Polymer Search

When you think about creating dashboards, what’s the first thing that comes to mind? Probably Power BI, Tableau, and a nervous twitch, because working with tables and charts can take hours. But what if there’s a way to do it faster, easier, and without yelling at your screen? Enter the neuro-analytics service—Polymer Search.

Why do we even need Polymer Search?

Polymer Search claims that building dashboards is now so fast, you won’t even have time to make yourself a cup of coffee. Mmm, with cream?  

Here’s what it promises:  
– Dashboard created in one click  
– No coding required  
– Automatic data visualizations  

Okay, sounds cool, but we’re here to test claims against reality. So let’s see how it actually works.

But first, our seasoned expert, ChatGPT, will help the newcomer get up to speed and provide the initial data!

Step one: Using GPT to generate the data

Before we begin, let’s prepare the data for analysis. In our case, it’s webinar data where we want to understand which lecturer generates the most profit and who is working ineffectively. We sent a request to GPT, asking it to calculate the ROI for the webinars. GPT provided formulas and even suggested a table template that we could use for further filling.

We won’t dwell on working with ChatGPT in detail, as much has already been written about it, and we’ve previously explored the various useful aspects of this tool for data visualization people in the article: Creating a Dashboard Using ChatGPT.
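The article doesn’t reproduce the formulas GPT provided, but the standard webinar-ROI calculation it would have suggested looks something like this (the function name and inputs are my assumptions, not the actual template):

```python
# Hypothetical sketch of a webinar-ROI calculation like the one ChatGPT
# would suggest. Names and numbers are illustrative assumptions.

def webinar_roi(revenue, cost):
    """Return ROI as a percentage: profit relative to cost."""
    return (revenue - cost) / cost * 100

# Example: a webinar that cost $500 to run and brought in $1,200
print(webinar_roi(1200, 500))  # 140.0
```

With a column of revenues and costs per webinar, applying this row by row gives the per-lecturer ROI figures used later in the dashboard.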

First request:

ChatGPT prompt example

And the template:

Template for our ROI table, generated by ChatGPT

Now we can give the data to our candidate!

Step two: Importing data into Polymer Search

Now that we have the data in hand, let’s start testing our main candidate! We’ll upload the data into Polymer Search. Let’s explore how to work with it effectively. After all, every specialist requires a tailored approach!

And here’s what impressed me right away: Polymer Search generated the dashboard almost instantly. What usually takes several hours (or days if Excel decides to corrupt the file) is done in just a few seconds here.

Dashboard, generated by Polymer Search

Of course, the chart formatting needs improvement: the bar charts have diagonal axis labels, and the pie chart needs some adjustments too. But it’s a good start.

The boss’s heart is already rejoicing; it seems this candidate could be useful! And Polymer Search not only created a dashboard layout but also suggested several key metrics for visualization:

  • Total revenue from webinars—it’s always interesting to see the overall numbers.  
  • Average ROI by lecturers—who sells and who just talks. This is important in business…  
  • Conversion of participants to paying customers—a metric that reveals the true state of affairs in the online school.  
  • Quality assessment of webinars—to gauge how much the audience enjoyed it, as we are in it for the long haul.
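The same four metrics are easy to compute by hand from a flat webinar table, which makes a useful cross-check on what the tool reports. A minimal pandas sketch (all column names and values are made up for illustration):

```python
import pandas as pd

# Hypothetical flat webinar table; columns and values are illustrative.
df = pd.DataFrame({
    "lecturer":  ["Anna", "Anna", "Boris"],
    "revenue":   [1200, 800, 400],
    "roi":       [140.0, 60.0, -20.0],
    "attendees": [100, 80, 50],
    "buyers":    [10, 6, 2],
    "rating":    [4.6, 4.2, 3.8],
})

total_revenue = df["revenue"].sum()                      # total revenue from webinars
avg_roi = df.groupby("lecturer")["roi"].mean()           # average ROI by lecturer
conversion = df["buyers"].sum() / df["attendees"].sum()  # participants -> paying customers
avg_rating = df["rating"].mean()                         # quality assessment

print(total_revenue, round(conversion, 3), round(avg_rating, 2))
```

Comparing these hand-computed numbers against the generated dashboard is a quick way to catch the “odd calculations” some of the candidates above are prone to.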

Step three: Setting up visualizations

Working with visualizations in Polymer Search is also a pleasure. You can choose from various types of charts, from simple bar graphs to more complex diagrams. All of this is done in just a few clicks; it feels like you have a reliable, understanding assistant who knows what you want from half a word or a glance. Like a dream!

Polymer Search suggests a pie chart for my task
We can add labels
And change the colors, not bad!

But let’s return to the real analysis tasks:  

For example, you can look at the revenue by webinar topics, which helps understand which topics have “hit the mark.” Or analyze the average ROI by lecturers to find out who brings in the most sales.

Result from the real task

Interesting plus: Predictive data  

One of the coolest features in Polymer Search is the ability to forecast data based on what has already been uploaded. The tool doesn’t just visualize current data; it also attempts to predict what will happen next. 

For example, you might see that one of your webinars could grow by 65% in the coming months. However, I can’t say for sure that this is a reliable forecast to depend on since I haven’t seen their calculations. 

But it’s intriguing! The assistant doesn’t just mindlessly fulfill your requests; it also knows how to dream about the future!

(Playing with the forecasts was the most interesting part!)

Step four: Automatic data refresh  

Another advantage of Polymer Search is its automatic data refresh feature. When you add a new webinar to the table, the data on the dashboard updates immediately. No more struggles with manual refreshes; everything happens quickly and smoothly. This truly makes life easier, especially when the data changes frequently.

Downsides of Polymer Search

Of course, despite all its advantages, Polymer Search has some limitations that should be considered. No tool is perfect; it’s essential to understand your assistant’s constraints from the start and not overload it with tasks that are beyond its current capabilities.

  • Data must be flat at input: The tool works only with a single table. If your data is spread across multiple tables, you’ll need to combine them.
  • Not all visualizations hit the mark: Some graphs look good but don’t always provide accurate or useful insights. You might occasionally need to make manual adjustments.
  • Limited chart customization: You can’t configure every aspect of the visualizations as flexibly as in Power BI or Tableau. This is sufficient for basic needs, but if you require in-depth control, you’ll encounter limitations.
  • Inflexible grid: Moving elements around on the dashboard isn’t always convenient. The tool offers minimal customization options for object placement.
  • Paid system: Yes, Polymer Search is not a free tool, but if you need to quickly create something decent and accessible online, the cost is justified.
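Since the tool only accepts a single flat table, data spread across several tables has to be joined before upload. A pandas sketch of that preparation step (table and column names are illustrative assumptions):

```python
import pandas as pd

# Illustrative source tables; names and columns are assumptions.
webinars = pd.DataFrame({
    "webinar_id":  [1, 2],
    "lecturer_id": [10, 11],
    "revenue":     [1200, 400],
})
lecturers = pd.DataFrame({
    "lecturer_id": [10, 11],
    "lecturer":    ["Anna", "Boris"],
})

# Join everything into one flat table, the only input shape the tool accepts.
flat = webinars.merge(lecturers, on="lecturer_id", how="left")
# flat.to_csv("webinars_flat.csv", index=False)  # then upload the CSV
print(flat.shape)  # (2, 4)
```

A left join keeps every webinar row even if a lecturer record is missing, which is usually what you want when preparing an upload.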

Results of the trial period!

Polymer Search is a powerful tool for those looking to speed up the dashboard creation process and save time. Its predictive data and automatic updates make working with dashboards more efficient, but it’s important to be aware of its limitations. If you need to quickly create something on the fly, without delving deeply into settings and programming, Polymer Search is a nice choice for a personal assistant.

So, if you want to eliminate the routine in dashboard creation, Polymer Search is a good option. The time you save can be spent on something more enjoyable. 

Maybe finally brew that coffee? Mmm, with cream!

Well, that’s the overview of AI tools I prepared for you! 

If you have interesting ideas on how to use them effectively, share them on social media—I’d love to learn something new! The more wonderful assistants there are among AI tools, the more enjoyable the work will be for specialists! And I don’t believe they will be left without jobs; instead, many routine tasks can be shifted to the reliable shoulders of AI colleagues.

Categories: Use Tools


Implementing the Solution Framework in a Social Impact Project https://nightingaledvs.com/implementing-the-solution-framework/ Thu, 12 Sep 2024 15:02:48 +0000 https://dvsnightingstg.wpenginepowered.com/?p=21897 Those who know me will see that I love documenting every step of my data visualization process. I believe there’s so much to learn from..

The post Implementing the Solution Framework in a Social Impact Project appeared first on Nightingale.

Those who know me know that I love documenting every step of my data visualization process. I believe there’s so much to learn from reflecting on what works and what doesn’t, and it helps avoid making the same mistakes in the future.

However, as I gained more experience, I discovered that clients often don’t share this enthusiasm. They tend to focus more on the insights rather than the process. Brent Dykes captures this well with his movie analogy, distinguishing between behind-the-scenes documentaries and compelling narratives.

But don’t worry if you’re passionate about sharing your process—all is not lost. I’m happy to share that I’ve found the perfect audience for presenting the details of the data visualization process: researchers!

In collaboration with Kevin Ford, we built an Educational Report for the EduVis Workshop at the VIS 2024 conference. This report outlines the data visualization process using the Solution Framework in a social impact project. Each stage includes practical lessons, making it a useful guide for educational and professional settings.

Diagram illustrating the six-phase solution framework for VizForSocialGood. The phases include 1) Understanding the problem, 2) Sketch ideas, 3) Prototype, 4) Harden the data, 5) Developing, and 6) Testing.
Representation of the Solution Framework

Why a solution framework?

A Solution Framework is important for data-driven teams and companies as it provides a structured approach to problem-solving and project execution. Implementing a framework offers several benefits, including:

Standardization and Consistency
Ensures that all projects adhere to the same high standards, reducing variability and enhancing quality.

Efficiency and Automation
Automates repetitive tasks, freeing up time for more strategic work and improving overall efficiency.

Collaboration and Communication
Facilitates better communication and collaboration among team members, ensuring that everyone is aligned and working towards common goals.

Case study: Social Impact Project

In this case study, we aimed to test the framework outside the corporate environment by participating in a social impact project promoted by VizForSocialGood (VFSG).

Who is VizForSocialGood?
In a data-driven world, VFSG emerged as a group of data visualization volunteers dedicated to helping non-profits with their data strategies. Since 2017, VFSG has united over 700 volunteers globally and assisted 41 non-profits, including the World Health Organization, Bridges to Prosperity, Build Up Nepal, Sunny Street, and many more.

How does VizForSocialGood work?
VFSG collaborates with non-profits to define project goals and structure datasets. They launch the project on social media, providing data visualization practitioners with resources, chat support, and feedback for creating compelling visualizations. Each project typically lasts one month, concluding with volunteers submitting their projects for selection and presentation.

Visualizing the social impact of VizForSocialGood

In March 2024, VFSG decided to share their own data with the goal of seeking help from volunteers to demonstrate tangible benefits and changes brought by the organization. They aimed to encourage continued support and engagement from the community, potential volunteers, and partners by illustrating the value and impact of these projects. Additionally, they sought to motivate the audience to support future projects by volunteering, partnering, donating, or spreading the word about VizForSocialGood.

In the following chat logs, you’ll see the practical implementation of each project stage, guided by the Solution Framework. These conversations between the mentor and mentee provide insight into the iterative process, challenges faced, and insights gained.

Portraits of the team members involved in the data visualization process, including the mentee, mentor, and directors of VizForSocialGood.
Participants in the data visualization process

Understanding the problem

Before diving into the data, it’s important to understand the audience and their questions. Identify their primary concerns and the top issues on their minds.

◻ Who’s the audience?
◻ What are their questions?

Mentee Mar 21, 2024
I reviewed the organization’s mission, values, and the project scope. The project goal is too broad, aiming to address all audiences simultaneously. I’m curious to know if a single visualization could achieve this or if I should focus on a single audience. I decided to sketch the audience and their main questions.

Abstract sketch depicting the VizForSocialGood audience. It includes volunteers with data visualization tools, funders represented by boxes of resources, the VFSG team as bridge builders connecting with charities, and the broader community interested in social impact projects.
Sketch of the audience and their main questions.

Operation Director Mar 24, 2024
So cool! We need more of these funders!

Mentee Mar 24, 2024
Oh wow! That’s helpful. Now, I have a narrow scope and know where to focus. But wait a minute, I don’t know anything about fundraising. Let me do a little research about it first.

Sketch ideas

Validate assumptions and involve the client in the ideation process.

◻ Confirm the problem
◻ User co-owners

Mentee Mar 29, 2024
Okay, I have done my research. I have looked up ways that non-profits sell the idea of raising funds. Two important things are showing funders the benefits of investing in the company and showcasing how the organization will spend the resources. I assume VFSG can spend the resources on operational costs, community engagement, and capacity training. I created a storyboard using a train analogy to illustrate VFSG’s mission and encourage donations. I think I’m ready to start developing.

Sketch of the solution, storyboarding.

Mentor Mar 29, 2024
Wait! Before moving on to development, please confirm your assumptions. I really like that you researched general fundraising strategies, but you need to confirm all your assumptions with your user first. In this stage, you are sketching based on your understanding, with your bias and background. Go back to the client so they can teach us about it and confirm if we are on the right path.

Mentee Mar 29, 2024
All right, I will contact the Fundraising Director, and we can set up a meeting to discuss my solution.

Meeting with Fundraising Director Apr 05, 2024
I would like to hear any ideas and the context around the storyboard and discuss how it could potentially be used in the fundraising plan for VFSG.

Mentee Apr 05, 2024
Wow, that meeting really helped me. I confirmed my assumptions, and the director provided more insight into the fundraising strategy. The director also gave feedback on my sketch, and we decided that a short infographic would be more effective than a long-format presentation for this audience.

Mock up and prototype

Develop a more robust solution focusing on usability. Share prototypes with the client for feedback without emphasizing the numerical details at this stage.

◻ Dirty real data
◻ Confirm the solution

Mentee Apr 08, 2024
I created a new sketch. This one is easier to digest and only contains the most important information. As a personal touch, I kept the analogy of the train. I am going to share it with the Fundraising Director.

a sketch of the infographic structure and key indicators, a refined visualization organizing the information, and a draft visualization incorporating the initial sketches.
Sketch and prototype of the solution.

Fundraising Director Apr 10, 2024
This is amazing! I love your creativity and approach. I’m only wondering if there’s a way to include how support will help change lives/global reach.

Mentee Apr 10, 2024
Thank you for your feedback! That’s a good point. Let me see if I can incorporate that without redundancy.

Harden the data

Automate data collection, processing, and presentation. Ensure the final dataset is coherent and reliable.

◻ Proper data sources
◻ Automate: What is the plan or procedure for future updates?

Mentee Apr 10, 2024
I started working on the data model and wrote the functions to calculate the main indicators. This step was efficient since I had defined all the necessary data in the prototype. 

Mentor Apr 10, 2024
Exactly, that’s why you need to define your solution before moving to development. It can be messy to program and figure out all the variables at the same time. Also, it makes it easy to build a process to automate and facilitate future visualization updates.
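The report doesn’t show the mentee’s actual functions, but the idea of hardening indicators into reusable code, so a future data update only means re-running a script, looks roughly like this (the indicator names are my assumptions, based on what appears in the final visualization):

```python
# Hypothetical sketch of "hardened" indicator functions. Re-running the
# script on refreshed data recomputes every headline number consistently.
# Data shape and indicator names are illustrative assumptions.

def count_nonprofits(projects):
    """Distinct non-profits helped across all projects."""
    return len({p["nonprofit"] for p in projects})

def count_volunteers(projects):
    """Distinct volunteers across all projects."""
    return len({v for p in projects for v in p["volunteers"]})

projects = [
    {"nonprofit": "Build Up Nepal", "volunteers": ["ana", "ben"]},
    {"nonprofit": "Sunny Street",   "volunteers": ["ben", "cai"]},
]
print(count_nonprofits(projects), count_volunteers(projects))  # 2 3
```

Defining the indicators once, after the solution is locked, is exactly the mentor’s point: programming and figuring out the variables at the same time is what gets messy.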

Develop content

Construct the interface while adhering to best practices in data visualization.

◻ Build interface
◻ Best Practices

Mentee Apr 11, 2024
I finished the solution. I used Figma for the prototype, Procreate for sketches, and Tableau for the visualization. I am going to share it with the client for their feedback.

Data visualization highlighting VizForSocialGood’s key indicators. The upper section presents the number of nonprofits helped, volunteers involved, and projects completed. The lower section features recognitions, social media statistics, a donation invitation, and a breakdown of resource distribution.
Finished Solution Before Feedback.

Testing

Conduct comprehensive testing with the target audience and data visualization experts.

◻ Beta testing
◻ Test outside Audience

Mentor Apr 11, 2024
Make sure you also get feedback outside of your audience. Why don’t you share the visualization with the Elevate DataViz community?

Mentee Apr 11, 2024
You were right. I received valuable feedback from both groups. From the VFSG community, I received feedback on copywriting. From the Elevate community, I received feedback about the placement of the Donate button. The original position didn’t align with how our eyes followed the infographic. Now, it’s in a better position where everyone can spot it easily.

Final data visualization of VizForSocialGood’s main indicators. The upper part includes the number of nonprofits helped, volunteers involved, and a donation invitation. The lower part displays recognitions, project counts, social media stats, and resource distribution details.
Final Visualization.

Presentation

Present the finalized solution to stakeholders, emphasizing both the process and the tangible impact of the project.

◻ How to communicate the insights?
◻ What metrics to highlight?

Mentee Apr 17, 2024
They really liked the final version and want us to present it in the live presentation. However, I’m still thinking about the Fundraising Director’s comment: “How support will help change lives/global reach.” I haven’t shown anything more tangible yet.

Mentor Apr 17, 2024
In my company, to show the impact of our visualizations, we calculate how much analyst time we save annually through our work. We could track the hours volunteers spent on the visualizations and use a factor of freelancing charge for that time. If there are 20 participants, 20 projects, 20 hours for each individual, and $120 an hour, participants have donated $960,000 in data visualization design and analysis.
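The mentor’s figure checks out; as a quick sanity check of the arithmetic:

```python
# Sanity check of the mentor's estimate: 20 participants per project,
# 20 projects, 20 hours per participant, at a $120/hour freelance rate.
participants_per_project = 20
projects = 20
hours_each = 20
rate = 120  # USD per hour

donated_value = participants_per_project * projects * hours_each * rate
print(f"${donated_value:,}")  # $960,000
```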

Mentee Apr 17, 2024
I like that idea. We should mention it in our presentation. 

Final Presentation Apr 19, 2024

Conclusion

In the end, it’s not just about the visualizations we create but the actions they inspire and the impact they have on our clients. By implementing the Solution Framework, we have created an effective data visualization and contributed to improving processes that support the organization’s mission. This case study highlights the importance of structured frameworks in driving social impact through data visualization.

VizForSocialGood May 2024
This work has inspired VFSG to convert volunteer time into hours and financial metrics. Now, we use these metrics in fundraising communications. This approach demonstrates the broader impact of our volunteers’ efforts and enhances our ability to communicate our value to potential funders.

VizForSocialGood May 2024
Together, our volunteers have contributed 11,000 hours and over $1 million worth of consulting and analytical services to 40+ non-profits worldwide.

Social media campaign graphic for VizForSocialGood. The message highlights that 76% of nonprofits lack a data strategy, and 79% lack time or personnel for data focus. VFSG connects these organizations with data visualization enthusiasts to create engaging visuals, helping them raise awareness, make data-driven decisions, engage donors, and expand their digital reach. The graphic notes that volunteers have contributed over 11,000 hours and $1 million in services to 40+ nonprofits globally.
New VFSG Fundraising message.
Categories: How To

