Hero’s Journey Story Generator in Python with Ollama

From my first bedtime story to late‑night coding sessions, I’ve always been captivated by the power of myth.

This tool—Hero’s Journey Story Generator—is born from that passion.

It stitches together your personal details and time‑tested narrative arcs to create legends that feel both intimate and epic.

Whether you’re an educator looking to spark students’ imaginations or a developer craving creative side projects, this generator offers:

  • A familiar story framework (Campbell’s Hero’s Journey) that resonates across cultures
  • Rich symbolic depth (Thompson Motif Index) for authentic mythic flavors
  • Local AI inference (Ollama) so you stay in control of your data
  • Offline narration (pyttsx3) for immersive, hands‑free listening
  • Markdown export for seamless integration into blogs, docs, or wikis

I’ve structured this guide to walk you through each component—no theory overload, just practical, tested code you can adapt today.


Objectives

The objective, for both the code and this article, is to create a simple, maintainable solution with a clear separation of concerns and sound architecture, even for a small, quirky script like this one.

The objectives are:

  1. Personalize each myth with the user’s name, birthdate, and life stage.
  2. Enrich narratives using authentic folklore motifs.
  3. Structure the code into clear, testable modules.
  4. Secure data and inference by running entirely offline.
  5. Deliver both text and audio outputs for maximum accessibility.

Campbell’s Hero’s Journey Overview

At the heart of every epic lies Joseph Campbell’s Hero’s Journey, a narrative template (popularized in a 12-stage form) that underpins myths across cultures.

Key stages include:

  • Ordinary World – The hero’s familiar environment
  • Call to Adventure – A challenge beckons
  • Refusal of the Call – Initial hesitation
  • Meeting the Mentor – Wise guidance appears
  • Crossing the Threshold – Entering the unknown
  • Road of Trials – Facing tests and ordeals
  • Transformation – Deep metamorphosis
  • Atonement & Reward – Claiming the treasure
  • Return with Elixir – Bringing knowledge back home

By mapping our generator’s prompts to these stages, we ensure each story follows a familiar arc while remaining uniquely personalized.

Ollama: Local LLM Interface

Ollama is a command‑line tool and Python client that hosts and runs large language models on your own machine.

Instead of relying on remote APIs, Ollama pulls models (e.g., llama3.2:1b) locally and serves them via a simple interface.

This allows for:

  • Privacy: No data leaves your device.
  • Performance: Local inference eliminates network latency. Small models can run on a consumer CPU.
  • Flexibility: Swap models by changing a single string in your code.
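
As a minimal sketch (assuming the Ollama service is running locally and the ollama Python package is installed), a one-off chat call looks like this; swapping models is just a matter of changing the model string:

    # Minimal sketch: one-off chat against a locally served model.
    # Assumes the Ollama service is running and the model has already been pulled.
    from ollama import Client

    client = Client()  # defaults to the local Ollama server
    response = client.chat(
        model='llama3.2:1b',  # swap models by changing this string
        messages=[{'role': 'user', 'content': "Summarize the Hero's Journey in one sentence."}],
    )
    print(response['message']['content'])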

Thompson Motif Index Explained

The Thompson Motif Index (TMI) is a comprehensive catalog—over 6,000 motifs—that scholars use to track recurring narrative elements across global folklore.

Examples include:

  • B210: Animal helper aids hero
  • U752: Magical mirror reveals truth

By sampling three motifs from a local JSON file, our generator injects symbolic depth into otherwise generic AI output, grounding each tale in mythic tradition.

Obtaining tmi.json

To obtain a well-prepared JSON file that includes all the motifs, follow these steps:

  1. Visit the fbkarsdorp/tmi GitHub repository.
  2. Navigate to data/tmi.json and click Raw.
  3. Save the file to your project directory as tmi.json.
  4. Ensure your script can read it (same folder or update the path).
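
If you prefer to script the download, a small sketch like this works (the raw URL below assumes the repository's default branch is named master; adjust it if the branch differs):

    # Sketch: download tmi.json into the project directory.
    # The raw URL assumes the repository's default branch is 'master'; adjust if needed.
    import urllib.request

    TMI_URL = "https://raw.githubusercontent.com/fbkarsdorp/tmi/master/data/tmi.json"
    urllib.request.urlretrieve(TMI_URL, "tmi.json")
    print("Saved tmi.json")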

Architecture Overview

The generator’s pipeline follows a clear, linear flow, ensuring each component interacts seamlessly.

Here’s a visual representation of the architecture:

Architecture Diagram

The flowchart shows three phases—Setup, Generation, and Output—with each method encapsulated in its own cluster for clarity:

  • Setup: check_model() → get_user_input() → load_motifs()
  • Generation: generate_hero_journey()
  • Output: save_to_markdown() → narrate_story()

Architecture Breakdown

Setup

  • check_model(): Verify or pull the selected LLM, guaranteeing local inference.
  • get_user_input(): Capture and validate the hero’s name and birthdate.
  • load_motifs(): Load the TMI JSON and sample three motifs to theme the narrative.

Generation

  • generate_hero_journey(): For each of the six Hero’s Journey stages, build a context-aware prompt (including name, age, motifs) and invoke the LLM to produce the story segment.

Output

  • save_to_markdown(): Compile metadata and all story segments into a styled Markdown document.
  • narrate_story(): Sequentially synthesize speech for each story part, creating an audio companion to the text.

This modular structure ensures each phase is isolated, testable, and easily customizable for future enhancements.

Function Breakdown

In this section, we break down each of the functions described above and briefly explain their implementation and usage.
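
Before diving in, here is a minimal sketch of the scaffolding the methods below assume. The import list is inferred from what the code uses, and the class name comes from the docstrings; treat it as a reading aid rather than the definitive file layout.

    # Sketch of the scaffolding the methods below assume (imports inferred from the code).
    import json
    import os
    import random
    import sys
    import time
    from datetime import datetime

    import pyttsx3
    from faker import Faker
    from ollama import Client
    from tqdm import tqdm


    class HeroJourneyGenerator:
        """Generates, saves, and narrates a personalized Hero's Journey."""

        # ... methods described below: __init__, check_model, get_user_input,
        # load_motifs, generate_story_part, generate_hero_journey,
        # save_to_markdown, narrate_story, run ...


    if __name__ == '__main__':
        HeroJourneyGenerator().run()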

Initialization (__init__)

    def __init__(self):
        """
        Initialize the HeroJourneyGenerator with necessary components.

        Sets up:
        - Faker for random name generation
        - Text-to-speech engine with configured properties
        - Ollama client for AI story generation
        """
        self.faker = Faker()
        self.engine = pyttsx3.init()
        self.engine.setProperty('rate', 150)  # Speed of speech
        self.engine.setProperty('volume', 0.9)  # Volume (0.0 to 1.0)
        self.ollama_client = Client()
        self.model_name = 'llama3.2:1b'

Code Description:

  • Faker: Generates or sanitizes hero names.
  • TTS engine: Preconfigured for offline narration.
  • Ollama client & model: Establish the LLM interface.

Model Verification (check_model)

    def check_model(self):
        """
        Check if the required Ollama model is available and pull it if needed.

        This method:
        1. Checks if the model exists in the local Ollama installation
        2. Pulls the model if it's not found
        3. Shows download progress with a progress bar
        4. Handles any errors during the process
        """
        try:
            # List local models to check whether the configured model is available
            models = self.ollama_client.list()
            model_names = [model.get('name', '') for model in models.get('models', [])]

            if self.model_name not in model_names:
                print(f"\nModel '{self.model_name}' not found. Pulling it now...")
                print("This may take a few minutes depending on your internet connection.")

                # Create a progress bar
                with tqdm(total=0, desc="Downloading model", unit="B", unit_scale=True, unit_divisor=1024) as pbar:
                    # Start the pull operation
                    pull_operation = self.ollama_client.pull(self.model_name, stream=True)

                    # Update progress bar based on the stream
                    for chunk in pull_operation:
                        if hasattr(chunk, 'status'):
                            # Update progress if we have completed and total values
                            if hasattr(chunk, 'completed') and hasattr(chunk, 'total'):
                                try:
                                    completed = float(chunk.completed)
                                    total = float(chunk.total)
                                    if total > 0:
                                        # Update total if it changes
                                        if pbar.total != total:
                                            pbar.total = total

                                        # Update completed bytes
                                        pbar.update(completed - pbar.n)
                                except (ValueError, TypeError):
                                    pass

                            # Handle completion
                            if chunk.status == 'success':
                                pbar.update(pbar.total - pbar.n)
                                break

                            # Add a small delay to make the progress visible
                            time.sleep(0.1)

                print(f"\nModel '{self.model_name}' has been pulled successfully!")
        except Exception as e:
            print(f"\nError checking/pulling model: {e}")
            print("\nPlease make sure:")
            print("1. Ollama is installed and running")
            print("2. You have an internet connection")
            print("3. You have enough disk space")
            print("\nYou can manually pull the model by running:")
            print(f"ollama pull {self.model_name}")
            sys.exit(1)

This function ensures the required model is available locally, pulling it if necessary to maintain an offline workflow.

It displays a tqdm progress bar to track the model download.

User Input (get_user_input)

    def get_user_input(self):
        """
        Collect user information for story customization.

        Returns:
            tuple: (name, birthdate) where:
                - name is either user input or a randomly generated name
                - birthdate is a string in YYYY-MM-DD format
        """
        print("\n=== Welcome to the Hero's Journey Story Generator ===")

        # Get user input
        name = input("\nEnter your name (or press Enter for a random name): ").strip()
        if not name:
            name = self.faker.name()
            print(f"Generated name: {name}")

        # Get birthdate from user
        while True:
            birthdate = input("\nEnter your birthdate (YYYY-MM-DD): ").strip()
            try:
                datetime.strptime(birthdate, '%Y-%m-%d')
                break
            except ValueError:
                print("Invalid date format. Please use YYYY-MM-DD.")

        return name, birthdate

Code Description:

  • Name: Fallback to Faker when left blank.
  • Birthdate: Loop until a valid ISO format is entered.

Motif Sampling (load_motifs)

    def load_motifs(self):
        """
        Load and select random motifs from the Thompson Motif Index.

        Returns:
            list: A list of randomly selected motif dictionaries from tmi.json
        """
        try:
            # Load motifs from tmi.json
            with open('tmi.json', 'r', encoding='utf-8') as f:
                motifs = json.load(f)
            # Select 3 random motifs
            selected_motifs = random.sample(motifs, 3)
            return selected_motifs
        except FileNotFoundError:
            print("Error: tmi.json file not found!")
            return []

Loads the full motif dataset from the tmi.json file and picks three at random, setting the thematic tone for the narrative.
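
The rest of the code assumes each entry in tmi.json carries a motif code, a description, and optional locations and lemmas lists (these are the fields that save_to_markdown and generate_hero_journey read). Based on the example run later in the article, one entry looks roughly like this:

    # Illustrative shape of a tmi.json entry, based on the fields the code reads
    # and one motif from the example run; not authoritative index data.
    sample_motif = {
        "motif": "D1394",
        "description": "Magic object helps hero in trial.",
        "locations": [],
        "lemmas": ["hero", "object", "magic", "trial"],
    }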

Generating Story Segments (generate_story_part)

    def generate_story_part(self, prompt):
        """
        Generate a story part using Ollama API.

        Args:
            prompt (str): The prompt to generate the story part

        Returns:
            str: The generated story part text
        """
        try:
            system_prompt = """You are a master storyteller specializing in mythological narratives. 
                Your task is to create engaging, vivid, and meaningful stories that follow the Hero's Journey structure.
                Focus on:
                - Rich, descriptive language that paints vivid imagery
                - Emotional depth and character development
                - Mythological and symbolic elements
                - A balance of action and reflection
                - Maintaining a consistent narrative voice
                - Incorporating the provided motifs naturally into the story

                Write in a style that is both accessible and profound, suitable for a modern audience while maintaining the timeless quality of mythology.
            """
            # Generate story part using Ollama API
            response = self.ollama_client.chat(model=self.model_name, 
                # Set messages
                messages=[
                    {
                        'role': 'system',
                        'content': system_prompt
                    },
                    {
                        'role': 'user',
                        'content': prompt
                    }],
                # Set temperature and number of threads
                options={
                    "temperature": 0.7,
                    "num_thread": os.cpu_count() * 0.5
                })
            return response['message']['content'].strip()
        except Exception as e:
            print(f"Error generating story part: {e}")
            return "Error generating this part of the story."

It sets up a system prompt that instructs the model to act as a master storyteller and passes the user prompt for the specific story part to generate the content.

Two options are also set: the temperature is 0.7 to keep the output creative but coherent, and the thread count is capped at roughly half of the available CPU threads to keep the system responsive.
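
As a quick smoke test (a hypothetical snippet, assuming the class is assembled as in the scaffold sketch earlier), you can call the method directly with an ad-hoc prompt:

    # Hypothetical quick test of generate_story_part with an ad-hoc prompt.
    generator = HeroJourneyGenerator()
    generator.check_model()  # make sure the model is available locally
    part = generator.generate_story_part(
        "Write a two-sentence call to adventure for a curious librarian."
    )
    print(part)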

Orchestrating the Journey (generate_hero_journey)

    def generate_hero_journey(self, name, birthdate, motifs):
        """
        Generate the complete Hero's Journey story.

        Args:
            name (str): The hero's name
            birthdate (str): The hero's birthdate in YYYY-MM-DD format
            motifs (list): List of motif dictionaries to incorporate

        Returns:
            list: List of tuples containing (title, content) for each part of the journey
        """
        story_parts = []

        # Extract motif descriptions for the prompts
        motif_descriptions = [motif['description'] for motif in motifs]

        # Calculate age for story context
        birth_date = datetime.strptime(birthdate, '%Y-%m-%d')
        current_date = datetime.now()
        age = current_date.year - birth_date.year - ((current_date.month, current_date.day) < (birth_date.month, birth_date.day))

        # Define prompts for each part of the journey
        prompts = {
            "Call to Adventure": f"Create a compelling call to adventure for {name}, who is {age} years old. Born on {birthdate}, their journey begins at a pivotal moment in their life. Incorporate these motifs: {', '.join(motif_descriptions)}. Make it personal and engaging, reflecting their age and life stage.",
            "Supernatural Aid": f"Describe the supernatural aid that comes to {age}-year-old {name}'s assistance. Born on {birthdate}, they receive guidance that resonates with their life experience. Using these motifs: {', '.join(motif_descriptions)}, make it mysterious and powerful, tailored to their age and background.",
            "Road of Trials": f"Narrate the challenging trials that {name}, born on {birthdate}, must face at the age of {age}. These challenges should reflect their life stage and personal growth. Incorporating these motifs: {', '.join(motif_descriptions)}, make it dramatic and transformative.",
            "Apotheosis": f"Describe {name}'s moment of apotheosis or transformation at the age of {age}. Born on {birthdate}, this moment should reflect their accumulated life experience. Using these motifs: {', '.join(motif_descriptions)}, make it profound and meaningful to their personal journey.",
            "The Ultimate Boon": f"Detail the ultimate boon or reward that {age}-year-old {name} obtains. Born on {birthdate}, this reward should be significant to their life stage and personal development. Incorporating these motifs: {', '.join(motif_descriptions)}, make it valuable and meaningful to their journey.",
            "Return": f"Tell the story of {name}'s return and how they use their newfound wisdom at the age of {age}. Born on {birthdate}, their return should reflect their growth and the impact of their journey. Using these motifs: {', '.join(motif_descriptions)}, make it satisfying and complete."
        }

        # Generate each part of the journey
        for title, prompt in prompts.items():
            print(f"\nGenerating {title}...")
            story_part = self.generate_story_part(prompt)
            story_parts.append((title, story_part))

        return story_parts

Automatically contextualizes each Hero’s Journey phase with user data and motifs, then generates the six narrative blocks.
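
The age calculation deserves a quick worked example: the tuple comparison subtracts one year only when the birthday has not yet occurred in the current calendar year. Using the birthdate from the example run and an assumed "today" of 2025-04-24:

    # Worked example of the age formula used above (the "today" date is assumed).
    from datetime import datetime

    birth_date = datetime.strptime('1980-01-01', '%Y-%m-%d')
    today = datetime(2025, 4, 24)
    age = today.year - birth_date.year - ((today.month, today.day) < (birth_date.month, birth_date.day))
    print(age)  # 45: the January birthday has already passed, so nothing is subtracted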

Markdown Export (save_to_markdown)

    def save_to_markdown(self, name, birthdate, motifs, story_parts):
        """
        Save the story to a markdown file.

        Args:
            name (str): The hero's name
            birthdate (str): The hero's birthdate
            motifs (list): List of motif dictionaries used in the story
            story_parts (list): List of tuples containing (title, content) for each part
        """
        with open('myth_story.md', 'w', encoding='utf-8') as f:
            # Write header
            f.write(f"# {name}'s Hero's Journey\n\n")
            f.write(f"**Birthdate:** {birthdate}\n\n")
            f.write("## Selected Motifs\n")
            for motif in motifs:
                f.write(f"- **{motif['motif']}**: {motif['description']}\n")
                if motif['locations']:
                    f.write(f"  - Locations: {', '.join(motif['locations'])}\n")
                if motif['lemmas']:
                    f.write(f"  - Related terms: {', '.join(motif['lemmas'])}\n")
            f.write("\n## The Hero's Journey\n\n")

            # Write story parts
            for title, content in story_parts:
                f.write(f"### {title}\n\n")
                f.write(f"{content}\n\n")

Produces a structured, styled Markdown document, ready for GitHub, blogs, or static site generators.

Text-to-Speech (narrate_story)

    def narrate_story(self, story_parts):
        """
        Narrate the story using text-to-speech.

        Args:
            story_parts (list): List of tuples containing (title, content) for each part
        """
        print("\nNarrating the story...")
        for title, content in story_parts:
            # Narrate each part
            self.engine.say(f"{title}. {content}")
            # Wait for the current part to finish
            self.engine.runAndWait()

Delivers an immersive audio experience, prefacing each segment with its phase title and reading the content with TTS.
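
If you also want the narration as an audio file rather than live playback, pyttsx3 can render speech to disk. A small variant might look like this (the method name and output filename are my own; format support depends on the local TTS backend):

    # Sketch: render the narration to an audio file instead of speaking it aloud.
    # Hypothetical helper; the filename is arbitrary.
    def narrate_story_to_file(self, story_parts, filename='myth_story.wav'):
        text = " ".join(f"{title}. {content}" for title, content in story_parts)
        self.engine.save_to_file(text, filename)
        self.engine.runAndWait()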

Workflow (run)

    def run(self):
        """
        Main execution flow of the story generator.

        This method:
        1. Checks for the required model
        2. Gets user input
        3. Loads motifs
        4. Generates the story
        5. Saves it to markdown
        6. Narrates it using text-to-speech
        """
        # Check for model availability first
        self.check_model()

        # Get user input    
        name, birthdate = self.get_user_input()

        # Load motifs
        motifs = self.load_motifs()
        if not motifs:
            print("Unable to proceed without motifs. Please ensure tmi.json is present.")
            return

        # Generate the story
        print("\nGenerating your personalized Hero's Journey...")
        story_parts = self.generate_hero_journey(name, birthdate, motifs)

        # Save to markdown
        self.save_to_markdown(name, birthdate, motifs, story_parts)
        print("\nStory has been saved to 'myth_story.md'")

        # Narrate the story
        self.narrate_story(story_parts)

The run() method orchestrates the full workflow:

  • Checks for the required model
  • Gets user input
  • Loads motifs
  • Generates the story
  • Saves it to markdown
  • Narrates it using text-to-speech

Running an Example

$ python myth_generator.py

Model 'llama3.2:1b' not found. Pulling it now...
This may take a few minutes depending on your internet connection.
Downloading model: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 485/485 [00:01<00:00, 323B/s]

Model 'llama3.2:1b' has been pulled successfully!

=== Welcome to the Hero's Journey Story Generator ===

Enter your name (or press Enter for a random name): Nuno

Enter your birthdate (YYYY-MM-DD): 1980-01-01

Generating your personalized Hero's Journey...

Generating Call to Adventure...

...

Story has been saved to 'myth_story.md'

Excerpt from myth_story.md:

# Nuno's Hero's Journey

**Birthdate:** 1980-01-01

## Selected Motifs
- **D1394**: Magic object helps hero in trial.
  - Related terms: hero, object, magic, trial
- **M341.1.1.2**: Prophecy: death on seventh day of marriage.
  - Locations: India
  - Related terms: marriage, prophecy, death, seventh, day
- **D1533.2.1**: Box which travels above or below ground.
  - Related terms: box, ground

## The Hero's Journey

### Call to Adventure

Nuno's eyes wandered through the familiar streets of Lisbon, the 1980s nostalgia that lingered within him like a worn velvet cloak. January 1st, 1985, marked the beginning of a new chapter in his life – one filled with promise and uncertainty. As he strolled along the Tagus River, the scent of blooming jasmine wafted through the air, carrying with it whispers of a lifetime yet to unfold.

It was on this fateful day that Nuno's world was turned upside down. A fire had ravaged his family's antique shop, leaving him with more than just destruction – he had lost his sense of purpose. The void within seemed to mirror the emptiness that had crept into his life since his mother's passing a year prior.

You can access the source code here: https://github.com/nunombispo/PersonalMythologyGenerator

Conclusion

Crafting myths has always felt like channeling something timeless.

With the Hero’s Journey Story Generator, you hold the pen—and the power of AI—in your hands.

By dividing the logic into clear, purpose‑driven methods, I’ve aimed to make the code as accessible as the stories it produces.

  • Testability: Tweak or mock each function to suit your workflows.
  • Extensibility: Swap in new motif lists, theme-based prompts, or export formats without breaking the core.
  • Maintainability: Simple, focused methods keep the codebase lean and beginner‑friendly.

I can’t wait to see what legends you’ll create—whether it’s a classroom of budding heroes or a viral video, may your stories live forever.

Follow me on Twitter: https://twitter.com/DevAsService

Follow me on Instagram: https://www.instagram.com/devasservice/

Follow me on TikTok: https://www.tiktok.com/@devasservice

Follow me on YouTube: https://www.youtube.com/@DevAsService