The Tiles That Made Me: Mapping Friendship through the Lens of AI

According to the Oxford Dictionary, friendship is a “voluntary, personal relationship characterized by mutual affection, trust, and support.” To me, though, friendship is about authenticity: a trustworthy partnership built on fun, kindness, and understanding.

It’s the size of the smile on your face when you see someone. It’s the decision to stay in touch with a niece long after family events end. It’s the fragile silence between you and a friend who couldn’t support a recent life choice.

As a data designer, I’ve always been obsessed with how we categorise the intangible. Recently, I set out to map the people who have shaped me. I didn’t want a balance sheet, but I did want to see the patterns. A relationship always evolves; this would only represent a snapshot in time.

The Taxonomy of Connection

I began by listing every person I care about: first from memory, then verified against my friends list on Facebook. But as I opened my spreadsheet, the questions started to flood in. Can family members count as friends? My nieces and I have been chatting nonstop for years now. We grew fond of each other through the circumstance of birth, but we stayed in touch by choice. Does that make them friends? And what about a friend who couldn’t support a recent life choice? We were very close seven or eight months ago, but we are not now. Are we still friends? If I exclude her from this, does that mean I have given up on our friendship? I also use the term “friend” very loosely; I am naturally familiar with strangers. Is my new neighbour, with whom I have shared a few cups of tea, my friend?

To make sense of the friend list, I distilled friendship into three core metrics, scored on a scale of one to three, three being the highest rank possible: 

  • Reliability: Loyalty, faithfulness, and the feeling of being safe.
  • Empathy: Supportiveness, kindness, and open communication.
  • Joy: Playfulness, liveliness, and shared common ground (though one might question whether friendship is required for common ground; for the sake of this visualisation, I decided it was).

I also added two judgment values: Duration (how long we have been friends) and Contact (how recently we spoke). To keep the data honest, I limited the scope to friends I had contact with in the last 24 months. I chose 24 months as the cutoff because it’s the period since my daughter was born. Spoiler alert: in a time when I often felt lonely as a new mother, the data showed me I was actually deeply loved.

From Sketching to Scripting

In my notebook, the design evolved rather quickly into a series of “tiles.” I remember having the visual in my head for a while, and I felt as if I were a vessel letting it out onto the paper. I wanted something that would represent the scale’s levels easily. Level one was a simple base; level three added complex detail. 

Source: Or Misgav

Initially, I used background colors to denote duration, but the palette was too loud. It made the story about “how good I am at making friends” rather than “how these friendships built me.”

Source: Or Misgav

Then came the pivot. Usually, I build these visualizations by clicking the mouse. A thorough process of copying, pasting, and double-checking layers in Illustrator and Figma would easily take three hours. But, inspired by the “vision to execution with a click” movement, I turned to Claude and Gemini.

I asked Gemini to help me write the prompt for Claude, and Claude generated a Python script that processed my Excel file and rendered the stacked layers as PNG files. Claude even taught me how to install Python on my Mac. (Honestly, I felt like I was back in the 90s, typing into a terminal to launch a game.) Then: “Boom. Your tiles are ready.” With a single click, the assets were generated. A few back-and-forths with Claude, and the grid was aligned. The work was done.
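A minimal sketch of what such a script might look like, using pandas and Pillow; the column names, file paths, and layer-naming scheme here are illustrative, not the actual generated code:

```python
# Sketch of the tile-generation idea: read scored friendships from a
# spreadsheet, then composite pre-drawn layer images into one PNG per person.
# Column names, paths, and the layer-naming scheme are hypothetical.
import pandas as pd
from PIL import Image

df = pd.read_excel("friends.xlsx")  # columns: name, reliability, empathy, joy

for _, row in df.iterrows():
    tile = Image.open("layers/base.png").convert("RGBA")
    # Each metric (scored 1-3) maps to a pre-drawn layer, e.g. "joy_2.png".
    # All layer images are assumed to share the base tile's dimensions.
    for metric in ("reliability", "empathy", "joy"):
        layer = Image.open(f"layers/{metric}_{int(row[metric])}.png").convert("RGBA")
        tile = Image.alpha_composite(tile, layer)
    tile.save(f"tiles/{row['name']}.png")
```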

Source: Or Misgav

The Cost of Efficiency

As I looked at the finished folder, a strange feeling washed over me: I didn’t recognize the data. By automating the execution, I had accidentally bypassed the data familiarization stage — that meditative hour where you handle each data point with care and remember the person behind it. The tiles were beautiful, but they felt distant.

It raised a fundamental question for our field:
If the AI builds the layers, are we co-creators? Or are we just curators of our own memories?

End Result. Source: Or Misgav
How to read. Source: Or Misgav

The Tokens of Gratitude

Despite the digital distance, the final grid is a testament to my life. These tiles are me. They represent the people who stayed through puberty, the ones who signed my wedding book, and the new friendship that began at my son’s preschool pick-ups and has since grown close.

This project is more than a visualization; it’s a token of gratitude. It captures a snapshot of my soul as it exists in 2026. Shaped by humans, rendered by machines, and held together by the voluntary, personal relationships that make life worth mapping.

Categories: Data Art

The Shape of Why

From descriptive to prescriptive: How AI in data visualization is evolving from plotting the “What” to telling the “Why”.

Today’s AI systems can draw charts. Some can even summarize trends. But what they still can’t do well, and what’s becoming increasingly important, is explain. If you ask, “Why did revenue in the Northern region drop in Q2?” you are not asking for a bar chart. You are asking for a diagnosis. 

You expect not just data, but reasoning. 

A system that not only visualizes what happened, but most importantly helps uncover why it happened. This is the shift from descriptive AI to prescriptive visual intelligence. And building that kind of system is not just about better models. It’s about rethinking how language, data, and visualization are connected in a layered reasoning pipeline. 

Let’s walk through what happens under the hood and what it takes to move from displaying numbers to constructing visual explanations.

From prompt to structured intention

When a user types, “Can you show why revenue dropped in the Northern region during Q2?” they’re not issuing a fixed instruction. They’re initiating a layered analytical request. 

From a prescriptive analytical perspective, this sentence has multiple layers of semantic structure, including metric inference, temporal framing, comparative logic, and implied causality.

For example:

  • “Why” triggers diagnostic mode. The system must look for explanatory factors.
  • “Revenue” isn’t always a physical field. It’s likely derived from price, quantity, or subject to discounts. 
  • “Northern region” may map to a dimensional hierarchy or a filter that spans several locations.
  • “Q2” may refer to a calendar or fiscal quarter, which must be resolved contextually.
  • “Dropped” implies change over time, but the reference point, e.g. Q1, is unstated. It must be inferred. 

These linguistic cues must be resolved into a structured analytical intent—a model of inquiry that determines what data is needed, how it should be interpreted, and how its meaning can be most effectively communicated through visualization.
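One way to picture the output of this resolution step is as a small structured object that downstream stages can act on. A minimal sketch in Python; the field names are invented for illustration, not taken from any particular product:

```python
# Illustrative shape of a resolved analytical intent; field names are invented.
from dataclasses import dataclass

@dataclass
class AnalyticalIntent:
    mode: str        # "diagnostic" -- triggered by the word "why"
    metric: str      # may be a derived calculation, not a physical field
    segment: dict    # may resolve to a hierarchy or multi-location filter
    period: str      # resolved contextually (calendar vs. fiscal quarter)
    baseline: str    # inferred reference point -- the user never said "Q1"
    direction: str   # the change the user asserted

intent = AnalyticalIntent(
    mode="diagnostic",
    metric="revenue",
    segment={"region": "Northern"},
    period="2025-Q2",
    baseline="2025-Q1",
    direction="drop",
)
```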

Semantic layers

No meaningful analysis can begin without knowing what the user’s words refer to. In simple Business Objects-style systems, like the traditional BI platforms used for reporting, querying, and analysing data, metrics and dimensions are often just labels in a schema: they exist, but the system doesn’t “understand” them. In prescriptive systems, those terms must be interpreted through a semantic layer, a metadata-driven model that defines what each concept means, how it relates to other fields, and how business logic should be applied.

When the system sees “revenue,” it consults this layer to check:

  1. Is it a standalone field or a calculation derived from other fields?
  2. Does it exist in raw transactional data or only in summary views?
  3. Is it affected by other variables, like return date or discount policy?

For the “Northern region,” the system needs to resolve whether it corresponds to a field in a table, a label in a dimension hierarchy, or a mapping that requires joining multiple datasets. For “Q2,” it checks whether the organization operates on a fiscal calendar and, if so, what date range Q2 actually covers.

These checks are not optional. Without them, the system might generate a syntactically valid query that answers a completely different question than the one the user intended.

The semantic layer acts as the system’s vocabulary of meaning—connecting human concepts to technical structure. This is what enables exploratory analysis and makes intent computationally actionable.
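To make this concrete, a semantic layer entry might carry metadata along these lines. The schema below is an invented illustration, not any particular product’s format:

```python
# Illustrative semantic-layer entries; the schema here is invented for clarity.
SEMANTIC_LAYER = {
    "revenue": {
        "kind": "derived_metric",
        "formula": "sum(price * quantity) - sum(discount)",
        "grain": "transaction",      # exists in raw data, not only summaries
        "influenced_by": ["conversion_rate", "discount_policy", "returns"],
    },
    "northern_region": {
        "kind": "dimension_filter",
        "resolves_to": {"table": "stores", "column": "region", "value": "North"},
    },
    "Q2": {
        "kind": "time_frame",
        "calendar": "fiscal",        # resolved per organization
        "date_range": ("2025-04-01", "2025-06-30"),
    },
}
```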

How the AI plans its queries 

Once the question has been resolved into structured intent, the system must plan its data extraction. But here, explanation requires more than a single aggregation. It requires context, comparison, and a hypothesis space. 

First, the system retrieves the primary metric—revenue across the relevant time periods, Q1 and Q2, for the target segment, Northern region. This validates that the drop is real and quantifies it.

Then it begins to gather possible explanatory data: channel spend, conversion rate, discount frequency, product category mix, etc.—anything that may have shifted between those quarters. Each of these drivers usually exists in a different table. Each query needs to be scoped, filtered, and time-aligned.

For example, one query may compare marketing channel allocations between Q1 and Q2, another may retrieve average conversion rates by month, and a third may analyze shifts in sales volume by product type.

Each result is a piece of potential evidence. The system is not drawing conclusions yet. It is preparing a “diagnostic map”—a time-aligned and scope-matched set of metric slices that could help explain the observed change. This is where prescriptive systems diverge from traditional BI: they don’t just retrieve facts, they retrieve candidate causes.
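In code, this planning step amounts to fanning a single intent out into several scoped, time-aligned queries. A rough sketch; build_query is a hypothetical stand-in for whatever SQL-generation layer a real system would use:

```python
# Rough sketch of diagnostic query planning; build_query() stands in for a
# real SQL-generation layer. Each query is scoped to the same segment and
# aligned to the same two periods so results can be compared directly.
from types import SimpleNamespace

CANDIDATE_DRIVERS = ["channel_spend", "conversion_rate",
                     "discount_frequency", "product_mix"]

def build_query(metric, segment, periods):
    # Stand-in: a real system would emit SQL here.
    return {"metric": metric, "segment": segment, "periods": periods}

def plan_queries(intent):
    periods = [intent.baseline, intent.period]
    # 1. Validate and quantify the primary change.
    queries = [build_query(intent.metric, intent.segment, periods)]
    # 2. One scoped, time-aligned query per candidate explanatory driver.
    queries += [build_query(d, intent.segment, periods)
                for d in CANDIDATE_DRIVERS]
    return queries  # together, these results form the "diagnostic map"

intent = SimpleNamespace(metric="revenue", segment={"region": "Northern"},
                         period="2025-Q2", baseline="2025-Q1")
diagnostic_map = plan_queries(intent)
```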

How the system identifies what matters

Once this diagnostic map is assembled, the system faces the least trivial challenge: identifying which of these shifts are plausible contributors to the revenue drop, and which are incidental.

To begin, it performs a layer of statistical screening: calculating deltas, highlighting drivers that show significant shifts between quarters, and filtering out noise. These are first-pass candidates—variables that changed when revenue changed.

But correlation is not an explanation. To go further, the system needs to assess causal relevance. It uses several methods in combination:

  1. Domain heuristics, embedded in the semantic layer (e.g. “conversion rate often drives revenue”)
  2. Trained models, such as XGBoost or other classifiers, which learn from historical data which combinations of changes have tended to precede metric shifts
  3. Causal pattern models, often built on transformer architectures, which generate hypotheses in natural language and evaluate them against temporal and structural data patterns

Each of these methods outputs a confidence score—a measure of how strongly a given variable’s change is likely connected to the revenue drop. The system then ranks its findings, forming a stack of “most likely drivers.”

The goal is not to prove causality in the strict sense that would require controlled experiments. The goal is to build a narrative scaffold from statistically and semantically defensible signals. This scaffold is what the system will soon express—through both visuals and text—as its explanation.
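A simplified sketch of the screening-and-ranking pass described above; real systems blend this delta-based screen with learned and heuristic confidence scores, all of which are collapsed here into a single relative-change score:

```python
# Simplified screening pass: compute per-driver deltas between periods,
# drop noise below a threshold, and rank the survivors. Real systems would
# blend this with learned and heuristic confidence scores.
def rank_drivers(slices, noise_threshold=0.05):
    candidates = []
    for name, (baseline, current) in slices.items():
        delta = (current - baseline) / baseline if baseline else 0.0
        if abs(delta) >= noise_threshold:      # first-pass statistical screen
            candidates.append((name, delta))
    # Larger relative shifts rank higher among "most likely drivers".
    return sorted(candidates, key=lambda c: abs(c[1]), reverse=True)

# Illustrative numbers: conversion rate fell ~22%, spend barely moved.
ranked = rank_drivers({
    "conversion_rate": (0.046, 0.036),
    "channel_spend":   (120_000, 118_000),
})
```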

Designing the visual explanation

Once the system has identified the likely drivers behind the change—and scored them by relevance—it must decide how to present that insight. This is not a matter of generating a bar chart or plotting a trend line. In prescriptive systems, visualization is not decoration—it’s a rhetorical structure. The chart is part of the argument.

The system begins by selecting a visual template that aligns with the structure of the explanation. If the insight involves temporal shifts, it will likely favor line charts with layered annotations. If the cause is distributional—say, a change in product mix—it may use a stacked area chart or a small-multiple layout. If the explanation depends on relationships between variables (for example, spend → conversion → revenue), it might choose a flow diagram or funnel.

But the most important decision is not the chart type; it’s the narrative logic the chart is meant to carry. An explanation chart must do at least three things:

  1. Present the main metric and its change over time
  2. Overlay or juxtapose the candidate driver
  3. Provide visual emphasis (e.g., annotations, highlights) to guide the user’s interpretation

This process is guided by a visual grammar “engine,” usually built on top of something like a custom D3 abstraction layer. The system also translates the insight into natural language. Something like: “Revenue in the Northern region dropped by 17% in Q2. This coincided with a 22% drop in conversion rate beginning in April, following a shift in marketing channel allocation. The trend is visible in the chart below.”

Together, the visual and the narrative constitute the system’s answer. Not just a chart. Not just a sentence. A multi-modal explanation, assembled to help the user not just see the data, but understand what happened.
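Compressed into code, the assembly step pairs a template choice with a generated sentence. The template names and wording below are invented for illustration:

```python
# Compressed sketch of explanation assembly; template names are invented.
def choose_template(explanation_kind):
    return {
        "temporal":       "line_chart_with_annotations",
        "distributional": "stacked_area_or_small_multiples",
        "relational":     "flow_or_funnel_diagram",
    }[explanation_kind]

def narrate(metric, delta, driver, driver_delta):
    return (f"{metric} dropped by {abs(delta):.0%} in Q2. This coincided "
            f"with a {abs(driver_delta):.0%} drop in {driver}. "
            "The trend is visible in the chart below.")

template = choose_template("temporal")
caption = narrate("Revenue", -0.17, "conversion rate", -0.22)
```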

Why this changes the role of visualization…and the analyst

The implications of this shift are deep. In traditional BI workflows, charts are static outputs—visualized summaries of queries written by humans. But in prescriptive AI, charts become expressive artifacts—generated as part of the system’s attempt to construct meaning. 

This changes both how we build systems and how we interpret them. Analysts will no longer spend most of their time choosing chart types or filtering dashboards. Instead, they will work as editors of algorithmic reasoning: validating what the system inferred, refining its narrative framing, and questioning its assumptions.

In turn, visual literacy itself evolves. It’s no longer just the ability to interpret axes and legends. It’s the ability to interrogate machine-generated visual arguments: “What did the system decide to highlight?” “What’s being compared?” “What was left out?”

In this new reality, charts are no longer the answers. They are hypotheses: drawn in real time, constructed by models, and subject to judgment.

Conclusion: From charting to cognition

The future of AI is not about building dashboards faster. It is about building systems that can synthesise patterns, identify causes, and communicate explanations in ways we humans can easily grasp.

Prescriptive visual intelligence doesn’t mean better chart automation. It means that when we ask a question like “Why did revenue fall?” we don’t just get a number or a trend line.

We get a visual theory.

We get a structured answer.

We get a system that tries to explain what led to what.

And in that moment data visualization becomes more than visual design. It becomes a language for computational thought.

All images were made using Napkin.ai

Transforming AI-Driven Data Analytics with DeepSeek: A Critical Analysis of Visualization Capabilities

Artificial intelligence (AI) has made significant strides in revolutionizing data visualization. From simplifying complex datasets to automating real-time insights, AI-powered tools are changing how businesses, researchers, and analysts interact with data. 

Among these tools, DeepSeek R-1 has emerged as a powerful AI model, offering advanced visualization capabilities that set it apart from conventional tools like Power BI and Tableau.

However, while AI-generated visualizations hold immense potential, skepticism remains regarding their accuracy, adaptability, and practical implementation. This article critically analyzes DeepSeek R-1’s visualization capabilities and highlights areas where AI still requires human oversight.

Fig. 1: DeepSeek R-1 Training Pipeline Visualization. Credit: Harris Chan

Understanding DeepSeek R-1’s model 

DeepSeek R-1 is built on an advanced transformer-based architecture designed to process complex reasoning tasks, mathematical computations, and code generation. Unlike traditional visualization tools that require predefined templates and manual configurations, DeepSeek R-1 dynamically interprets data, leveraging deep neural networks to generate real-time, adaptive visual outputs.

Key Components of DeepSeek R-1’s Architecture

DeepSeek R-1 processes raw data through several critical components, as illustrated in the diagram below:

1. Tokenizer

  • The first step involves the tokenizer, which breaks down raw data into structured elements.
  • DeepSeek R-1’s tokenizer operates with a vocabulary size of ~130,000, ensuring it can interpret diverse datasets.

2. Transformer Blocks

  • The core of DeepSeek R-1 consists of 61 transformer blocks that analyze patterns and relationships within data.
  • These layers extract complex dependencies from input data, allowing the model to generate nuanced insights.
Fig. 2: DeepSeek R-1 transformer block. Credit: Jay Alammar.

3. Mixture-of-Experts (MoE) Mechanism

  • Unlike standard transformer models, DeepSeek R-1 integrates MoE Transformer Blocks, which dynamically route information through specialized experts.
  • Each input token is assigned to the most relevant expert, improving computational efficiency and enhancing adaptive learning capabilities.
  • The model includes 256 routed experts, with 8 active experts per token, optimizing performance for large-scale data visualization tasks.
Fig. 3: A diagram illustrating the different components of DeepSeek R-1’s architecture. Credit: Jay Alammar.
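To make the routing idea concrete, here is a toy sketch of top-k expert selection in NumPy, mirroring the “8 active of 256 routed experts” figure above at a cartoon level of detail; it omits shared experts, load balancing, and the expert networks themselves:

```python
# Toy top-k MoE routing in NumPy: each token is sent to the 8 experts whose
# router scores are highest. Shapes and values are illustrative only.
import numpy as np

n_tokens, d_model, n_experts, k = 4, 16, 256, 8
rng = np.random.default_rng(0)

tokens = rng.standard_normal((n_tokens, d_model))
router = rng.standard_normal((d_model, n_experts))

scores = tokens @ router                        # (n_tokens, n_experts)
topk = np.argsort(scores, axis=1)[:, -k:]       # indices of 8 chosen experts
weights = np.take_along_axis(scores, topk, axis=1)
weights = np.exp(weights) / np.exp(weights).sum(axis=1, keepdims=True)
# Each token's output is then a weighted mix of its k experts' outputs
# (the expert feed-forward networks are omitted from this sketch).
```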

Why This Matters for AI-Generated Visualizations

DeepSeek R-1’s modular architecture allows it to process vast amounts of data with a higher degree of adaptability than traditional visualization tools like Tableau or Power BI. Instead of relying on pre-built templates, it can dynamically generate data visualizations based on live datasets, making it particularly valuable for real-time analytics, trend forecasting, and pattern recognition.

This structural advantage explains why DeepSeek R-1 is increasingly considered a powerful alternative to traditional data visualization platforms, offering a more automated, AI-driven approach to data interpretation.

Comparing DeepSeek R-1 to traditional visualization tools

Traditional tools like Power BI and Tableau are widely used for data visualization, but they often require manual adjustments and human expertise to generate meaningful insights. DeepSeek R-1, on the other hand, aims to automate this process, leveraging AI to analyze data and create real-time visual outputs.

Feature comparison of DeepSeek R-1, Power BI, and Tableau:

  • Real-time adaptability: DeepSeek R-1 ✔ (fully AI-driven); Power BI ❌ (limited); Tableau ❌ (limited)
  • Automated insights: DeepSeek R-1 ✔ (yes); Power BI ❌ (requires manual input); Tableau ❌ (requires manual input)
  • Customization: DeepSeek R-1 ✔ (AI-generated based on content); Power BI ✔ (manual customization); Tableau ✔ (manual customization)
  • Code-free interaction: DeepSeek R-1 ✔ (yes); Power BI ❌ (some coding required); Tableau ❌ (some coding required)
  • Scalability: DeepSeek R-1 ✔ (handles vast datasets efficiently); Power BI ✔ (scalable, but with performance limits); Tableau ✔ (scalable, but with performance limits)

As the comparison above shows, DeepSeek R-1 outperforms traditional tools in automation and adaptability. However, skepticism arises when discussing the accuracy of AI-generated insights compared to human-designed visualizations.

Performance Benchmark: DeepSeek R-1 vs. Traditional Tools

One of the most critical aspects of evaluating DeepSeek R-1’s effectiveness is comparing its performance benchmarks against other leading AI models and traditional visualization tools. DeepSeek R-1 has demonstrated superior results in several key areas, including mathematical reasoning, programming capabilities, and general knowledge assessments.

The following benchmark chart showcases DeepSeek R-1’s accuracy and percentile scores across various datasets, including AIME 2024, Codeforces, GPQA Diamond, MATH-500, MMLU, and SWE-bench Verified.

Source: DeepSeek

Key Takeaways from the Performance Comparison:

  1. Superior Mathematical and Logical Reasoning: DeepSeek R-1 outperforms OpenAI’s models (o1-1217 and o1-mini) in datasets such as MATH-500 (97.3%) and AIME 2024 (79.8%), demonstrating its robust mathematical computation capabilities.
  2. Strong Programming Performance: On the Codeforces percentile ranking, DeepSeek R-1 achieves a 96.3% score, making it a strong contender for code-related visualizations where AI can assist in identifying optimized patterns and debugging insights.
  3. General Knowledge and AI Adaptability: The model ranks higher than its competitors in the MMLU and GPQA Diamond categories, indicating its ability to generate well-informed, AI-driven insights for various industries.

How This Relates to Visualization

Unlike traditional tools like Power BI or Tableau, which require predefined templates and human intervention, DeepSeek R-1 automates insight generation by leveraging its advanced reasoning abilities. These benchmarks suggest why the software can generate data visualizations that are both highly accurate and adaptive, reducing the need for constant manual adjustments.

By utilizing these strengths, businesses and analysts can rely on DeepSeek R-1 for real-time, AI-driven data visualizations that remain competitive with (and sometimes outperform) conventional tools. However, as discussed in the limitations section, AI-driven visualizations still require critical oversight to avoid misinterpretations and biases.

Real-world applications: AI-generated vs. human-created visualizations

One of the main criticisms of AI-driven visualizations is their lack of human intuition. While AI can process vast amounts of data quickly, it often lacks the ability to determine contextual relevance or interpret abstract patterns that a human analyst might recognize.

Example: AI vs. Human-Generated Dashboard

To illustrate this, let’s compare a human-designed sales performance dashboard with an AI-generated version created by DeepSeek R-1:

  • Human-Created Visualization:
    • Focuses on key performance indicators (KPIs) selected by an analyst.
    • Custom formatting to highlight trends.
    • Uses domain knowledge to emphasize crucial insights.
  • DeepSeek R-1-Generated Visualization:
    • Dynamically identifies correlations without predefined input.
    • Can highlight unexpected trends that a human might overlook.
    • Real-time adaptability based on data changes.

While AI-generated dashboards excel at real-time adaptability, human oversight is still necessary to refine the visual output for business relevance.

Source: GeoDelta Labs

Mind map summarization with DeepSeek R-1

Another unique capability of DeepSeek R-1 is its ability to create mind maps from large textual datasets. Mind maps help break down complex information into visually structured formats, aiding decision-making and strategic planning.

AI-Generated vs. Human-Generated Mind Maps

DeepSeek R-1 can instantly summarize a research paper or business report into a mind map. However, while the AI-driven process is fast, human analysts often add subjective insights that make mind maps more useful in decision-making.

Fig. 4: An AI-generated mind map illustrating the differences in detail and usability. Credit to the author.

Challenges and limitations of AI-generated visualizations

While DeepSeek R-1 showcases impressive capabilities, it is not without its limitations.

1. Data Interpretation Challenges

AI lacks contextual understanding beyond the dataset it is trained on. For example, while DeepSeek R-1 can generate correlations, it cannot determine causation, leading to misleading insights if left unverified.

2. Ethical Concerns in AI-Generated Visuals

AI models may inadvertently introduce biases present in their training data. If DeepSeek R-1 is trained on biased datasets, the resulting visualizations could reinforce misleading trends or incorrect assumptions.

3. The Need for Human Oversight

Despite its automation, DeepSeek R-1 still requires human intervention for:

  • Verifying the accuracy of visualized insights
  • Ensuring ethical considerations in data representation
  • Customizing outputs for audience-specific relevance

Balancing AI automation with human expertise

DeepSeek R-1 is a game-changing step in the realm of AI-powered data visualization. For businesses and analysts, its ability to automate insights, create real-time dashboards, and visualize complex patterns makes it a must-have for data analysis.

Nevertheless, its shortcomings—contextual misinterpretations, ethical biases, and a need for human oversight—reveal that AI-generated visualizations are not a substitute for domain specialization.

To fully leverage the power of AI-based data visualization, organizations should rely on a hybrid model where AI does the heavy lifting while human analysis polishes the final result. By doing so, enterprises ensure that their visual insights are accurate and meaningful, as well as fast and scalable.

Interviewing AI Assistants for Data Visualization

In today’s world, you need to run very fast just to stay in place. Technology is developing at an incredible speed, but I don’t believe it will replace specialists! I believe it will become a loyal assistant, helping to eliminate routine tasks.

Our team is keeping a close eye on all the latest AI innovations that could be useful for data visualization specialists and BI analysts working with data, graphs, and dashboards. These professionals are expected to deliver insights and visual representations of all kinds of data for various business needs. So much to do, so much to handle! A little help from an assistant certainly wouldn’t hurt.

We regularly review interesting AI tools, and in this article, we want to briefly introduce a few of them, while diving deeper into one of the most exciting ones!

So, months of researching the AI data visualization market have brought us the following insights! Our main tester has been Anya, our marketing director and neural network specialist! You can read a few of her articles on this topic on Medium, where she reviews some of these products. Highly informative reading!

Now, let’s take a look at the list of potential assistants! Who will we hire for our team? Tell us a little about yourselves, dear candidates. I’ve heard that some of you are great at working with data, but not all of you understand charts. And some can even build dashboards? How about a trial period and a test task? All agreed?

Logos of modern AI tools that can be useful to data visualization specialists and BI analysts

Perplexity
What it does: helps gather and analyze information
Drawbacks: the service may be too compliant with your request. Better to phrase it as: “I want to understand whether small businesses need content marketing. Give me an answer with pros and cons.”

Athenic
What it does: analyzes data and builds charts
Drawbacks: works only with one sheet of data and sometimes makes odd calculations

Julius
What it does: powerful data analytics with a user-friendly interface
Note: Ability to build and customize charts directly in the service
Drawbacks: good for quick data analysis, not suitable for complex calculations or merging datasets

ChatGPT
What it does: analyzes complex data and prepares visualizations
Note: Ability to build and customize charts directly in the service through new queries
Drawbacks: it’s always worth double-checking the data to verify trends and key findings, and clarifying the logic of its calculations.

Basedash
What it does: builds dashboards 100 times faster than manual assembly
Drawbacks: decent interface, but not the most convenient. Does not connect to Excel or basic tables, but supports many SQL databases.

Rows
What it does: replaces traditional spreadsheets, simplifies data analysis, automates dashboard and report creation, and eases collaboration on projects
Note: advantages include easy data integration and available templates
Drawbacks: honestly, sometimes it’s simpler and more convenient to use good old Excel

Polymer Search
What it does: simplifies dashboard creation and data visualization by automating the process and offering intuitive templates and AI features
Note: currently the most interesting tool on the list. Give it data, and it will build a dashboard on it!

This last candidate intrigued me immensely, so I invited them for a second round of interviews and personally had several conversations with them. We worked together on data visualization tasks and dashboard building! Based on the trial period results, help me decide—should I hire this assistant full-time?

Trial period: testing Polymer Search

When you think about creating dashboards, what’s the first thing that comes to mind? Probably Power BI, Tableau, and a nervous twitch, because working with tables and charts can take hours. But what if there’s a way to do it faster, easier, and without yelling at your screen? Enter the neuro-analytics service—Polymer Search.

Why do we even need Polymer Search?

Polymer Search claims that building dashboards is now so fast, you won’t even have time to make yourself a cup of coffee. Mmm, with cream?  

Here’s what it promises:  
– Dashboard created in one click  
– No coding required  
– Automatic data visualizations  

Okay, sounds cool, but we’re here to test claims against reality. So let’s see how it actually works.

But first, our seasoned expert, ChatGPT, will help the newcomer get up to speed and provide the initial data!

Step one: Using GPT to generate the data

Before we begin, let’s prepare the data for analysis. In our case, it’s webinar data where we want to understand which lecturer generates the most profit and who is working ineffectively. We sent a request to GPT, asking it to calculate the ROI for the webinars. GPT provided formulas and even suggested a table template that we could fill in.

We won’t dwell on working with ChatGPT in detail, as much has already been written about it, and we’ve previously explored the various useful aspects of this tool for data visualization people in the article: Creating a Dashboard Using ChatGPT.

First request:

ChatGPT prompt example

And the template:

Template for our ROI table, generated by ChatGPT
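For reference, the arithmetic behind such a template is simple. A minimal sketch of the kind of ROI calculation GPT suggested; the column names and numbers here are illustrative, not GPT’s exact output:

```python
# Minimal per-webinar ROI calculation; column names and values are invented.
import pandas as pd

webinars = pd.DataFrame({
    "lecturer": ["Anna", "Boris"],
    "revenue":  [4200, 1500],
    "cost":     [1000, 1200],
})
# ROI (%) = (revenue - cost) / cost * 100
webinars["roi_pct"] = (webinars["revenue"] - webinars["cost"]) / webinars["cost"] * 100
# Anna: 320% ROI; Boris: 25% -- the comparison the dashboard will surface.
```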

Now we can give the data to our candidate!

Step two: Importing data into Polymer Search

Now that we have the data in hand, let’s start testing our main candidate! We’ll upload the data into Polymer Search. Let’s explore how to work with it effectively. After all, every specialist requires a tailored approach!

And here’s what impressed me right away: Polymer Search generated the dashboard almost instantly. What usually takes several hours (or days if Excel decides to corrupt the file) is done in just a few seconds here.

Dashboard, generated by Polymer Search

Of course, the chart formatting needs improvement—diagonal text on bar charts. The pie chart also needs some adjustments! But it’s a good start.

The boss’s heart is already rejoicing; it seems this candidate could be useful! And Polymer Search not only created a dashboard layout but also suggested several key metrics for visualization:

  • Total revenue from webinars—it’s always interesting to see the overall numbers.  
  • Average ROI by lecturers—who sells and who just talks. This is important in business…  
  • Conversion of participants to paying customers—a metric that reveals the true state of affairs in the online school.  
  • Quality assessment of webinars—to gauge how much the audience enjoyed it, as we are in it for the long haul.

Step three: Setting up visualizations

Working with visualizations in Polymer Search is also a pleasure. You can choose from various types of charts, from simple bar graphs to more complex diagrams. All of this is done in just a few clicks; it feels like you have a reliable and understanding assistant who knows what you want with just a half-word or a glance. Like a dream!

Polymer Search suggests a pie chart for my task
We can add labels
And change the colors, not bad!

But let’s return to the real analysis tasks:  

For example, you can look at the revenue by webinar topics, which helps understand which topics have “hit the mark.” Or analyze the average ROI by lecturers to find out who brings in the most sales.

Result from the real task

Interesting plus: Predictive data  

One of the coolest features in Polymer Search is the ability to forecast data based on what has already been uploaded. The tool doesn’t just visualize current data; it also attempts to predict what will happen next. 

For example, you might see that one of your webinars could grow by 65% in the coming months. However, I can’t say for sure that this is a reliable forecast to depend on since I haven’t seen their calculations. 

But it’s intriguing! The assistant doesn’t just mindlessly fulfill your requests; it also knows how to dream about the future!

(Playing with the forecasts was the most interesting part!)

Step four: Automatic data refresh  

Another advantage of Polymer Search is its automatic data refresh feature. When you add a new webinar to the table, the data on the dashboard updates immediately. No more struggles with manual refreshes; everything happens quickly and smoothly. This truly makes life easier, especially when the data changes frequently.

Downsides of Polymer Search

Of course, despite all its advantages, Polymer Search has some limitations that should be considered. No tool is perfect; it’s essential to understand your assistant’s constraints from the start and not overload it with tasks that are beyond its current capabilities.

  • Data must be flat at input: The tool works only with a single table. If your data is spread across multiple tables, you’ll need to combine them.
  • Not all visualizations hit the mark: Some graphs look good but don’t always provide accurate or useful insights. You might occasionally need to make manual adjustments.
  • Limited chart customization: You can’t configure every aspect of the visualizations as flexibly as in Power BI or Tableau. This is sufficient for basic needs, but if you require in-depth control, you’ll encounter limitations.
  • Inflexible grid: Moving elements around on the dashboard isn’t always convenient. The tool offers minimal customization options for object placement.
  • Paid system: Yes, Polymer Search is not a free tool, but if you need to quickly create something decent and accessible online, the cost is justified.

Results of the trial period!

Polymer Search is a powerful tool for those looking to speed up the dashboard creation process and save time. Its predictive data and automatic updates make working with dashboards more efficient, but it’s important to be aware of its limitations. If you need to quickly create something on the fly, without delving deeply into settings and programming, Polymer Search is a nice choice for a personal assistant.

So, if you want to eliminate the routine in dashboard creation, Polymer Search is a good option. The time you save can be spent on something more enjoyable. 

Maybe finally brew that coffee? Mmm, with cream!

Well, that’s the overview of AI tools I prepared for you! 

If you have interesting ideas on how to use them effectively, share them on social media—I’d love to learn something new! The more wonderful assistants there are among AI tools, the more enjoyable the work will be for specialists! And I don’t believe they will be left without jobs; instead, many routine tasks can be shifted to the reliable shoulders of AI colleagues.

Categories: Use Tools

Making “The Pope on the Road,” an AI-Driven Interactive Site

Using ChatGPT and AI-generated images to build a scrollytelling story about Pope Francis’s visit to Hungary.


Pope Francis meets Viktor Orban: the actual meeting (left; source: Facebook page of Viktor Orbán) and the depiction generated by Midjourney (right).

Pope Francis visited Hungary for the second time in April 2023, and for this occasion, we at ATLO created an interactive site presenting the history of the Pope’s apostolic journeys and the history of the papacy. With new methods ranging from a parallax title screen to AI-generated imagery, the interactive package raised the bar for the quality of our present and future work. This article offers a glimpse into the methodology of the site “The Pope on the Road.”


On February 27th, 2023, the president of Hungary, Katalin Novák, announced that Pope Francis would visit Hungary between 28 and 30 April. The Pope had visited Hungary two years earlier for the International Eucharistic Congress, and we at ATLO decided to make a project centered around the papacy. However, the topic quickly lost relevance before we fully developed a publishable story.

Pope Francis’s second visit then gave us a new opportunity to make an interactive data site about his travels. In the two years that passed, we published many other projects, and realised that a well-developed narrative structure is just as important for our work as the data that make up our projects. We knew from the onset that the structure of the article would be based on the core method of our previous work from February 2023 titled “And the Earth Shakes”: a scrollytelling story that uses the inverted pyramid narrative structure, where the most important information appears at the top and the broader, background, and contextual information comes after.

We came up with the title as soon as we decided to bring this new story to life: “The Pope on the Road.”

Title screen of the story: “The Pope on the Road” in big bold letters over an AI-generated image resembling a Renaissance fresco, with the smaller subtitle “Pope Francis’ Apostolic Journey to Hungary.”

On its own, however, “The Pope on the Road” would not have been a significant improvement from the earthquake project. So, to spice things up, we added a new element to the article: the use of artificial intelligence (AI). 

AI tools are still in their infancy, but are already considered ground-breaking. Pictures of Pope Francis walking around in a Balenciaga-style puffer coat widely circulated both in the Hungarian and international press. In reality, these images were produced by an image-generating AI called Midjourney. These AI-generated and realistic illustrations of the Pope gave enough relevance and reason for us to use AI in “The Pope on the Road.” We’ll get into the details of AI image generation later in the article.

AI and the structure of “The Pope on the Road”

In addition to creating the illustrations for the article, we wanted to experiment with AI in other areas. The original plan was to have the entire article (structure, text, coding) written by another AI, ChatGPT. ChatGPT is able to give mostly competent answers to short text inputs (prompts) on a plethora of topics.

When we asked ChatGPT to provide a structure for the article, the AI gave us one almost identical to the structure we had come up with before consulting it. We finally settled on the AI’s suggestion with minimal modifications.

Prompt:

Consider the following: you are a data journalist who specialises in creating interactive data articles using moderate coding, charts and maps in their work. Your next project would be about the Pope’s apostolic visits all around the world, for the special occasion that the Pope is visiting Hungary on the date of the publication of this project.

Write me an outline of this data project: what chapters would there be, with what logical structure? What data is required? How would the story be presented at each chapter, and even at each paragraph? You can use the combination of text, maps, charts, interactivity, sound, images and videos within this project.

ChatGPT:

Outline for Data Project: “Pope Francis’ Apostolic Visits Around the World”

I. Introduction
– Brief background information on the role of the Pope and his apostolic visits
– Explanation of the significance of the Pope’s visit to Hungary
– Overview of the data to be presented in the project

II. The Pope’s Apostolic Visits: A Global Overview
– Overview of the Pope’s visits to different regions of the world
– Visual representation of the data using a world map
– Interactive elements allowing users to explore the data further
– Key insights on the most visited countries, the frequency of visits, and the reasons behind them

III. Historical Trends and Patterns in Papal Visits
– Analysis of the frequency and duration of Papal visits over time
– Comparison of the current Pope’s visits with his predecessors
– Graphical representation of the data using line charts and heat maps
– Key insights on the historical trends and patterns in Papal visits

IV. The Pope’s Visit to Hungary: Context and Implications
– Overview of Hungary’s religious and political landscape
– Discussion of the Pope’s itinerary and agenda during his visit to Hungary
– Analysis of the significance and potential impact of the visit
– Interactive map showing the locations of the Pope’s scheduled events in Hungary
– Key insights on the context and implications of the visit

V. Conclusion
– Summary of the key findings and insights from the project
– Reflection on the significance of the Pope’s apostolic visits in today’s world

Data Required:
– Historical data on Papal visits (frequency, duration, location, purpose, etc.)
– Data on religious and political landscape of Hungary and other countries visited by the Pope
– Data on the itinerary and agenda of the Pope’s visit to Hungary
– Maps, charts, and interactive elements for visual representation of the data
– Images and videos of the Pope’s visits

We started the story with the fourth chapter: “The Pope’s visit to Hungary.” This is the most relevant part of the topic, and we knew most readers would be interested in the Pope’s current stay in Hungary.

We added a chapter about the history of the papacy, making “The Pope on the Road” follow a structure even more similar to that of the earthquake project. Following the structural principles of the inverted pyramid, the most important information is presented at the beginning of the article (the Pope’s visit to Hungary). As the article progresses, we present interesting but less relevant data that are loosely related to the current events, and only serve to give topicality to the project (pastoral visits of previous Popes, the history of papacy).

We also asked ChatGPT to write some paragraphs, though these were merely playful attempts. The intro paragraph, however, was written entirely by the AI. And of course, it was not without errors: the AI guessed the Pope’s current trip to Hungary would be a four-day visit, so we corrected it to three days, the actual length of the trip.

Moreover, we asked ChatGPT to help us solve minor coding issues related to the map illustrations in the article. The article includes many maps using the Mapbox GL JS framework, and although the Mapbox documentation is very thorough, we still faced an issue that was documented neither on their website nor on Stack Overflow: we could not add topography to the Mapbox maps through the GL JS framework. The AI came up with a good solution based on a short prompt, so we have to give credit to ChatGPT for helping us add the terrain layer to the Mapbox maps featured in the project.

Screenshot of a map from the final story showing terrain layers: the locations the Pope visited on the first day in Budapest, shown as dots.
(Source: “The Pope on the Road” – ATLO)

Scrolling is in, buttons are out – the importance of “scrollytelling” and other methods

For the past decade, scrollytelling has been one of the most popular storytelling methods in online multimedia journalism. The 2012 New York Times article titled “Snow Fall: The Avalanche at Tunnel Creek” was instrumental in bringing this method into the limelight. The method is now part of the toolset of many newsrooms.

At ATLO, we have used scrollytelling since the beginning of the site’s existence. One can see this method being used in our earlier works, where we made fixed static images change on scroll (see our projects such as the “History of Budapest” and “Hungarian toponyms in the Carpathian Basin”). 

Animated roads

In our recent works, we have used a more advanced method, where scrolling manipulates interactive maps and charts (as in “Parliament 30,” “Budapest Street Names,” and “And the Earth Shakes”). The same method is used in “The Pope on the Road.” The new technical step here was presenting the route Pope Francis took each day of his visit to Hungary as an animated line triggered by scrolling.

Line animation showing Pope Francis’s trip on his first day in Hungary.
(Source: “The Pope on the Road” – ATLO)

Once triggered, the line is animated over the specified time. A tutorial for this method can be found on the official Mapbox site and on Chris Whong’s Github page. We wrote the JavaScript such that the lines appear (turning from transparent to black) over four seconds, and we programmed the animation to show each road based on the chapter in which it appears in the article. The lines could also be styled separately, which allowed us to make the line of the first day appear with lower opacity when the second day is presented, and the lines of the first two days appear with lower opacity when the third day is presented.

Line animation showing Pope Francis’s trip on his third day in Hungary.
(Source: “The Pope on the Road” – ATLO)

The immersive “Persepolis Reimagined” by Getty was a major inspiration for some of our technical solutions. The structure and method of its opening sequence are echoed in the background of “The Pope on the Road,” which appears during the three introductory texts, and in the 3D parallax effect of the opening. We used the GSAP ScrollTrigger library and the Parallax.js JavaScript library. ScrollTrigger let us create an effect similar to the road lines, with images appearing gradually. But instead of starting from transparent, each image first appears fully black (opaque), and its opacity dissolves through the scroll to eventually reveal the image. The opening image, divided into three parts, takes shape over the three chapters using this method.

The three images building the parallax effect in the opening section of the article, varying in opacity from 0 to 1.
(Source: “The Pope on the Road” – ATLO)

The parallax effect, which allows a website’s background to move at a different speed than the foreground content, becomes most noticeable when the image is complete. Within Parallax.js, our code specified a different depth for each image layer (three in our case: background, middle ground, and foreground). The depth of an image is a value between 0 and 1; the greater the difference between the values of each layer, the greater the 3D effect.

Parallax effect in the background of the article title.
(Source: “The Pope on the Road” – ATLO)

The Pope in Moving Panels

Another method inspired by “Persepolis” was the use of horizontally moving panels during vertical scrolling. Such panels bring up additional information, but instead of following the traditional vertical structure, they are horizontally positioned next to each other, moved from left to right by vertical scrolling.

Horizontal section movement schematic.
Source: greensock.com

“The Pope on the Road” has five sections that move horizontally as you scroll vertically: three sections for the three days of the Pope in Budapest, one section for a chart showing the heatmap of the travelling Pope’s apostolic journeys by year and continent, and one section for a chart showing the timeline of all the Popes. This method is also based on the GSAP ScrollTrigger module, and we used this template for these sections.

Horizontally moving slides in action.
(Source: “The Pope on the Road” – ATLO)

Music for the road

The use of background music is also inspired by “Persepolis.” We originally intended the music to start automatically when the page loaded, but most browsers no longer support autoplay of sounds or videos with audio. To make any sound play on a website, the user has to interact with the page in some way, which can be as simple as pressing a button. This is why a “sound” icon was added in the top left corner, which can be clicked to start the half-minute music snippet.

Realistic but fake Pope: using Midjourney AI images

The “picture” of the Pope in a puffer coat was just an episode in a longer story that begins to unfold, a story about AI-generated illustrations and images.

We wanted to include illustrations about the Pope’s visit to Budapest, but we didn’t want to infringe copyrights by using actual images. The images from previous visits could only illustrate events that had already happened. The connection between the Pope and AI had just stirred the world press, so this gave us enough reason to experiment with AI-generated images. 

Midjourney is one of the most advanced image-generating AIs. Over the past year, the quality of the images it can generate has improved tremendously, and with version 5 it can produce images that are almost indistinguishable from photographs (version 5.1, released in May, pushed this even further). Midjourney tries to visually interpret what you describe in writing. In a few seconds, it produces four versions of this written prompt, which you can either upscale and download at a higher resolution, or use to request variations of each of the four versions. These variations keep the same composition, but use different faces, objects, colours, etc.

Midjourney prompt, the four generated variations, and the chosen final image.
(Source: “The Pope on the Road” – ATLO)

With Midjourney, it is also possible to complement the text prompt with a real image that looks similar to what you would expect the AI to generate. Adding an image makes the person’s face or the composition of the image more accurate. This is how we created the images depicting Pope Francis getting off an airplane or waving from the car window.

Midjourney prompt with reference image, the four generated variations, and the chosen final image.
(Source: “The Pope on the Road” – ATLO)

However, in multi-character images, like the meeting between Viktor Orbán and Pope Francis or the meeting between Katalin Novák and Pope Francis, it is much harder to support a prompt with an image of a certain person. Either the AI cannot decide who is in the image, and everyone ends up with the same face, or it interprets the image as the stronger input and recreates it with added elements from the text prompt. The image network on which the AI was trained contains images of many famous people, so it can produce an accurate face of Pope Francis even without an additional image, but Viktor Orbán looks less like himself, and Katalin Novák gives the impression of a completely different woman.

Midjourney prompt with reference image and two sets of variations: an unsuccessful attempt whose eight generated images went unused.
(Source: “The Pope on the Road” – ATLO)

There are of course ethical concerns with AI-generated images, which should be self-evident to the lay user: can a notable person’s face be used to produce an image depicting a situation in which that person would not participate? As AI images become more and more realistic, will it be possible to determine whether an image is real or was created by AI? Will deepfakes become an even bigger issue with the combined use of AI-generated images and AI software that can animate these images? What impact will AI image creation have on the livelihoods of real artists, photographers, videographers, graphic designers, etc., if one can achieve almost indistinguishable results in a fraction of the time?

Can a notable person’s face be used to produce an image
depicting a situation in which that person would not participate?

In any case, these images will not disappear from our everyday lives. In fact, they will even become commonplace. However, there are some built-in limitations. In the first draft of the opening image, Pope Francis and Viktor Orban would have shaken hands in a church, symbolising that the Pope’s religious visit also has political significance. Interestingly, the word crucifix was part of the prompt describing the church in detail, and Midjourney did not produce any image. Instead, it warned us that frequent use of banned words would result in suspension of the user profile.

Many of the generated images depicted situations only partially true to reality. One example is Pope Francis waving from his car. The generated image still contains several errors: for one, Pope Francis is sitting in the driver's seat. And although he has five fingers on each hand (a long-awaited success, as AI had trouble generating realistic hands and feet for a long time), in reality he wears his ring on his right hand, not his left as the AI depicted.

Similarities between reality and the Midjourney image: left, a TikTok screengrab of Pope Francis waving from his car; right, a similar depiction generated by the AI.
(Source: “The Pope on the Road” – ATLO)

The AI depicted the Pope getting on and off the plane only as our prompt described it, with no additional knowledge of the world or of the Pope's medical condition. In reality, the Pope's current knee problems made him unable to walk down the stairs from the plane; instead, he used the elevator on the other side of the aircraft:

Differences between reality and the Midjourney image: left, a live-broadcast screengrab of Pope Francis leaving the airplane by elevator; right, the AI-generated image of him walking down the stairs.
(Source: “The Pope on the Road” – ATLO)

The Pope met young people at the Papp László Budapest Sports Arena on April 29th, and before his speech he walked among the participants. In one of the pictures taken at the time, he places his hand on the head of a young boy. Based on our prompt, the AI created a very similar picture of this event, showing the Pope and young people touching hands.

Similarities between reality and the Midjourney image: left, a real photo of Pope Francis blessing a disabled boy in a crowd of children, his hand resting on the boy's head; right, the AI-generated image of the Pope holding children's hands in front of an arena crowd.
(Source: “The Pope on the Road” – ATLO)

A similar situation arose when he met poor people and refugees at the Church of St. Elizabeth of Hungary, and again the AI produced a comparable image:

Similarities between reality and the Midjourney image: left, a real photo of Pope Francis holding the hand of an elderly woman in a wheelchair inside a church; right, the AI-generated image of him reaching toward an elderly woman in a crowded room.
(Source: “The Pope on the Road” – ATLO)

In conclusion, our aim with the AI-based elements in “The Pope on the Road” was to try out the new technologies and to show that these tools are capable not only of helping us compose articles, but also of producing visually correct and appealing images. As these technologies are still in their infancy, our experience was not without flaws.

Nevertheless, by using the AI image generator for this article, we learned its capabilities and limitations and arrived at an end result very close to our initial intentions. We also ended up using ChatGPT considerably less than we originally anticipated, but it still helped with the structure, the writing, and (to an extent) the coding, which allowed us to achieve a better final piece.

As for the use of AI images in journalism, we believe they could be useful in the future for illustrating events for which only textual evidence or testimony exists. Such images can serve as visual illustrations, though they will likely require the supervision of an actual person acting as the final judge of whether an image is factually, visually, and ethically sound enough to use. Given its advantages, we at ATLO believe this technology should not be rejected but embraced.

The full feature is available here.


This article was edited by Noëlle T. Rakotondravony.

Categories: How To Use Tools

The post Making “The Pope on the Road,” an AI-Driven Interactive Site appeared first on Nightingale.

Livable cities’ urban networks https://nightingaledvs.com/livable-cities-urban-networks/ Thu, 26 Jan 2023 14:00:00 +0000 https://dvsnightingstg.wpenginepowered.com/?p=15637 Road networks are magnificent bird-eye view fingerprints of cities, age-old topics of urban planning, and stable cornerstones of spatial data science. As a primary goal..

The post Livable cities’ urban networks appeared first on Nightingale.

Road networks are magnificent bird's-eye-view fingerprints of cities, age-old topics of urban planning, and stable cornerstones of spatial data science. Since a primary goal of today's urban planning is to design livable, future-proof cities via concepts like the 15-minute city, here I collect the top lists of the most livable cities and give a visual overview of their road networks – with a ChatGPT twist.

Which are the most livable cities on Earth?

Year after year, numerous rankings are published of the most livable cities, with Western Europe, Canada, and Australia usually topping the lists. These rankings try to factor in as many aspects as possible, from public transport to healthcare and urban health. Because they aim to be very general, it might well happen that for certain individuals, myself included, a personalized top list would look different. Diving into that debate is above my pay grade, so instead I collected six different rankings credible enough to be included in the City quality of life indices article on Wikipedia.

The lists included in this collection are the following: the Deutsche Bank Liveability Survey, Global Finance's World's Best Cities to Live, Mercer's Quality of Living Ranking, Monocle's Quality of Life Survey, Numbeo's Quality of Life Ranking, and The Economist Intelligence Unit's Global Liveability Ranking. All these lists capture somewhat different angles of livability, yet they also show quite a few similarities. Aiming for the most generic list of cities possible, I merged them all and picked the cities that were included in at least two of them – twelve cities in total.

It turns out Zurich is the most popular city here – it appeared on five of the six lists – followed by a tie between Vienna and Copenhagen, each of which made four rankings. Next come Vancouver, Melbourne, and Sydney, which each made three of the top lists. Finally, Geneva, Frankfurt, Amsterdam, Helsinki, Tokyo, and Wellington were all double-listed. In the rest of this article, I will refer to these twelve cities as the most livable cities (according to the merged rankings of differing but professional opinions).
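This merging step amounts to counting list memberships. As a minimal sketch in Python (the city lists below are illustrative placeholders, not the full rankings):

from collections import Counter

# Placeholder excerpts of the rankings; the real project used six full lists.
rankings = {
    "Deutsche Bank": ["Zurich", "Vienna", "Wellington"],
    "Global Finance": ["Zurich", "Copenhagen", "Vancouver"],
    "Mercer": ["Vienna", "Zurich", "Geneva"],
}

# Count how many rankings each city appears on, then keep those on at least two.
counts = Counter(city for cities in rankings.values() for city in cities)
most_livable = sorted(city for city, n in counts.items() if n >= 2)
print(most_livable)  # ['Vienna', 'Zurich'] for these placeholder lists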

A color palette from the AI

Over the past several weeks, OpenAI's ChatGPT has been making huge waves in countless domains, from AI research to copywriting. As my background is in physics, about as far from design as it gets, I am no natural talent with color palettes and such, so I decided to shoot my shot and ask the chatbot a few questions like this:

me:
Give me please a color palette of 12 pairs of colors which include the pairs of CherryRed and OffWhite, Coral and Yellow, and BubblegumPink and CherryRed when the colors are ordered based on the primary colors.

chatbot:

Sure, here is a color palette of 12 pairs of colors that include the pairs of Cherry Red and Off White, Coral and Yellow, and Bubblegum Pink and Cherry Red, ordered based on the primary colors:

Cherry Red (#ff0000) and Off White (#f5f5f5)
Coral (#ff7f50) and Yellow (#ffff00)
bubblegum Pink (#ffc0cb) and Cherry Red (#ff0000)
Red-Orange (#ff4500) and Lime Green (#32cd32)
Orange (#ffa500) and Green (#00ff00)
Lime Green (#32cd32) and Turquoise (#40e0d0)
Green (#00ff00) and Blue (#0000ff)
Turquoise (#40e0d0) and Purple (#6600ff)
Blue (#0000ff) and Off White (#f5f5f5)
Purple (#6600ff) and Coral (#ff7f50)
Off White (#f5f5f5) and bubblegum Pink (#ffc0cb)
Coral (#ff7f50) and Cherry Red (#ff0000)

As a reminder, keep in mind that color palettes often depend on the context, the purpose, and the medium where you will use them. Also, this ordering is based on the traditional primary colors but it is not always the case, some palettes could be organize differently.

After receiving a detailed answer in the blink of an eye, ChatGPT even bothered to give me a kind reminder – how thoughtful! After a few different versions of these questions, I ended up settling on two color palettes, which I manually merged. The main reason I had to do that was to increase the background-foreground contrast, which the diagrams here really need. While these contrast ratios are probably quite intuitive to human eyes, it still seemed too challenging to properly explain them to the AI – for now.
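If you want to check a color pair numerically, here is a small helper I sketched from the WCAG 2.x contrast definitions (my own code, not ChatGPT's):

# Contrast ratio between two hex colors, following the WCAG 2.x formulas.
def srgb_to_linear(channel):
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * srgb_to_linear(r) + 0.7152 * srgb_to_linear(g) + 0.0722 * srgb_to_linear(b)

def contrast_ratio(fg, bg):
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio("#ff0000", "#f5f5f5"))  # Cherry Red on Off White: about 3.7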

Finally, Figure 1 shows the pairs of colors generated and matched by the AI, and organized by me. 

Figure 1. Color palette created in collaboration with ChatGPT.

Urban network visualization

Visualizing and studying urban networks, such as city road networks, is crucial in numerous spatial data science applications. It comes up in transportation problems from traffic-jam forecasting to public transport redesign and micro-mobility station placement, and it plays a major role in accessibility and in achieving walkable, 15-minute cities. Also, these networks look cool, and it's fun to work with them!

So first, I went for my go-to tool, OSMnx, a Python package that combines the most widely used network science package (NetworkX) with the 101 of spatial data in Python (GeoPandas), allowing us to download, manipulate, and analyze geospatial data from OpenStreetMap fairly easily.

One of the intro-level functionalities is downloading the road network of a specified geometry, such as the administrative boundary of a city. To test how things work, I used my hometown, Siklós, a small town of about 10,000 people in the Hungarian countryside. Siklós has both a nice castle and an interesting road network, so all that was left was to combine the network data with the color palette I co-created. I visualized the inner part of the city: a circle around its centroid with a radius of 1.3 km. I used only Python, specifically Matplotlib, to create these maps.

Figure 2. The road network of the town Siklós in Hungary with different colorings.
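The workflow behind Figure 2, as a minimal sketch: the snippet below assumes a recent OSMnx release and applies one of the ChatGPT-suggested color pairs; the exact parameters behind the published maps may differ.

import osmnx as ox

# Download the drivable street network within roughly 1,300 m of the center of
# Siklós (OSMnx clips to a bounding box; the published maps use a circular crop).
G = ox.graph_from_address("Siklós, Hungary", dist=1300, network_type="drive")

# Render the network with Matplotlib via OSMnx, using Cherry Red on Off White.
fig, ax = ox.plot_graph(
    G,
    node_size=0,           # hide intersection nodes, keep only street segments
    edge_color="#ff0000",  # Cherry Red
    bgcolor="#f5f5f5",     # Off White
    edge_linewidth=0.8,
)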

Top cities visualized

Finally, I created a gallery of the twelve most livable cities based on the joint ranking, each visualized with the AI-generated color pairings in the following figures.

The post Livable cities’ urban networks appeared first on Nightingale.

I Asked an Artificial Intelligence to Draw a Chart https://nightingaledvs.com/i-asked-an-artificial-intelligence-to-draw-a-chart/ Wed, 29 Jun 2022 13:00:00 +0000 https://dvsnightingstg.wpenginepowered.com/?p=11625 A new generation of text-to-image generative models is rapidly changing what is possible in generative design. Two apps in particular, Midjourney and Dall-E 2, have..

The post I Asked an Artificial Intelligence to Draw a Chart appeared first on Nightingale.

A new generation of text-to-image generative models is rapidly changing what is possible in generative design. Two apps in particular, Midjourney and Dall-E 2, have changed the nature of generative imaging by combining advances in text model generation and image model generation to create something more nuanced than previous techniques.

“Yoda drinks a beer,” generated by Dall-E 2

The concept of text-to-image is simple: a user submits a text prompt such as “Yoda drinks a beer,” and the neural networks use the text to create a new image based on the associations in the language. When the concept was introduced, the results looked like a mash of uncanny-valley nightmares and JPEG compression. As you can see on the left, the new models allow far more stylistic illustration of the language, determined not only by the text as metadata but also by the context in which that language tends to appear. The many improvements in both the text models and the image models have created a whole new way to think about imaging and how we might write prompts to describe it.

There’s a whole world of ideas to explore here, but I have been interested in using these generative models to help us see data visualization designs from a new perspective. My interest in this topic began while using Midjourney – and while I think the results are better for dataviz purposes using that platform – I just gained access to Dall-E 2 so exploring the differences between the apps is equally interesting. I have a Twitter thread here that discusses my initial comparisons between the two platforms.

You begin the creative process with a text prompt. From there, you are presented with six images that you can download or use to create more variations. The process is iterative, and choosing which image to spawn variations from results in a sort of story. This article details my journey during a single exploratory session of about one hour using Dall-E 2.
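As an aside, Dall-E 2 later became reachable programmatically as well. Here is a rough sketch of the same generate-then-vary loop using OpenAI's early (0.x) Python package, which did not yet exist when this session took place; the API key and file name are placeholders:

import openai

openai.api_key = "sk-..."  # placeholder; supply your own API key

# Generate a batch of images from a text prompt.
result = openai.Image.create(prompt="Yoda drinks a beer", n=6, size="1024x1024")

# Request variations of a chosen image, mirroring the click-through UI flow.
variations = openai.Image.create_variation(image=open("chosen.png", "rb"), n=4, size="1024x1024")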


In honor of Juneteenth, I started with the prompt “Data visualization about economic progress in the African-American community since 1886 as designed by Otto Neurath”. The six results are visually interesting but not accurate to the data, time period, subject matter, chart type, or design style of Otto Neurath.

Version 1.1
Version 1.2

The third variant of the first pass was at least a bit more interesting. While not even close to being an Isotype, it's a fairly accurate bar chart with photographic styling (Image 1.3). The horizontal gradient at the top could be a fun design inspiration under the right circumstances.

Image 1.3

Versions of 1.3, below, created even artier styles of stacked photographic bar charts with lightly shaded backdrops. The two iterations below are particularly lovely. I'll note again that the actual subject matter of “economic progress in the African-American community since 1886” was not recognized by the model. I suspect that Dall-E 2's image model is trained on photographs and more recent art, so it likely overrides terms with fewer results. None of the generative models seem able to render words.

Version 2.1
Version 2.4

There was one more version of 1.3 that I liked – the image below, with the label overlapping the bars of the chart. This version seems taboo by dataviz design standards, but is it really so wrong if we measure the bars by their tops? The photo textures also seem busier, but the tops of the bars are always dark. I dig it!

Version 2.3

There was also a stylistic outlier in this second iteration. The version below seemed to drift away from the other charts and towards… a charcoal drawing? It is quite subtle and feels like a 1970s conceptual drawing by Robert Smithson. It is beautiful and surprisingly sumptuous.

Version 2.2

I created the two versions below, left (2.2-v1) and right (2.2-v2). I think either of these would make a handsome illustration pretty much as-is (and heck, maybe you'll see more of them gracing Nightingale in the future!). The left image reminds me a bit of a beach, and the right one of a picket fence. I don't mind them being so literal if the subject matter fits, and both have this lovely wet-media texture, as if they were drawn with ink.

Version 2.2-v1
Version 2.2-v2

I wanted to see more versions of the sand-dune vibes of (2.2-v1), so here are four takes on it below. All of them are interesting, but I like (2.2-v1-v3) most of all: the diagonally stacked bar charts are amazing, and the inversion of white over the dark background is stunning:

2.2-v1-v2
2.2-v1-v1
2.2-v1-v3
2.2-v1-v4

I also generated this version (3.1), below, on my third pass. As you can see, it took the photographic bar charts and split the design into a sort of set of small multiples. The labels are chunky, and what look like area charts start to morph into high-contrast landscapes.

Version 3.1

In the two versions below, (3.1-v1) and (3.1-v2), the chart is entirely gone, resulting in a grid of monochromatic images. I especially like the dark green on the yellowish background. These also feel like they wouldn't be out of place in a 1970s conceptual art book.

3.1-v1
3.1-v2

Moving back to version 3.2 – an iteration of the photographic bar chart that preceded the above images – produced the REALLY interesting graphic below. Not only do I have a design that could read as a multi-dimensional illustrative dataviz, but it also has some built-in comparisons at the bottom of the images (changes in growth?). Given the right data and communication objective, the ornamentation could be both interesting and meaningful.

Version 3.2

Things get even more interesting in this version of (3.2) below – buildings, slope graphs, cut-away topographies, and blueprints all in monochromatic green are arranged on some kind of… stacked distribution? This is an exciting almost-punk-rock xeroxed vibe and I am totally here for it!

3.2-v1

Below are two more versions based on the previous image (3.2-v1), but you can see the interpolation continues to skew more graphic (and wonderfully so).

In this version (3.1-v2), the inverted V shape cleaves the design into an ordered divergence. The tops of each bar are now more illustrative than photographic, leaving ghostly remainders below. At the bottom of the image, a single bar reminds me of the Midwestern horizontal spans of Frank Lloyd Wright's architecture, or of an Art Deco glyph. How interesting!

3.1-v2

Version (3.1-v4), below, is a Rorschach test of associations and patterns. The bars are now glued together by striated horizontal stripes to create a sort of cut-away landscape. The results make me think a bit of the speculative UX design by Territory Studio for Blade Runner 2049 – something alien and ancient.

3.1-v4

The image below (3.1-v4-v1 … how do people name these things?) shows the chart giving way to a series of vertical slices of a landscape. Along the bottom, the bar chart seems to have turned into the remnants of collapsing buildings.

3.1-v4-v1

I stopped further refinement here, as the chart attributes had disappeared from the generative images, and those were really the focus of my exploration. Maybe I'll just keep making 20 versions of a single chart one day, but until then, I'll end this post with some other images that I didn't cover above but that are pretty fun.

3.3
3.4
3.5
3.1-v3

These generative models have the power to help us see how data design could be handled totally differently, not just as a surrealist impression, but as an input for considering chart formats in a new way. I have not yet tried to put these design concepts into practice, but that will certainly be among the next steps. Let's see how this continues to develop; I'm sure there will be more to explore!

The post I Asked an Artificial Intelligence to Draw a Chart appeared first on Nightingale.
