# 2ticketss

Two complementary command-line tools for GitHub issue management:

- **00-jira-to-gh-issues**: a Rust tool that converts Jira CSV exports into GitHub issue Markdown files compatible with gh-issue-generator. It handles messy CSV data and preserves issue metadata.
- **01-gh-issue-generator**: a Rust tool that creates GitHub issues from Markdown files with YAML front matter. It parses structured Markdown, supports batch processing, and integrates with the GitHub CLI.
Commit f74bab9ed4 (parent 3cf9748684), 2025-04-04 22:32:49 -06:00. 16 changed files with 19626 additions and 0 deletions.

## 01-gh-issue-generator/.gitignore
# Generated by Cargo
/target/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock
# Reports generated by the tool
/report/
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
# macOS files
.DS_Store
# IDE files
.idea/
.vscode/
*.iml

## 01-gh-issue-generator/Cargo.toml
[package]
name = "gh-issue-generator"
version = "0.1.0"
edition = "2021"
authors = ["b3"]
description = "Tool to generate GitHub issues from markdown files"
[dependencies]
clap = { version = "4.4", features = ["derive"] }
serde = { version = "1.0", features = ["derive"] }
serde_yaml = "0.9"
chrono = "0.4"
walkdir = "2.4"
regex = "1.10"
thiserror = "1.0"
anyhow = "1.0"
log = "0.4"
env_logger = "0.10"
dirs = "5.0"
toml = "0.8"

## 01-gh-issue-generator/README.md
# GitHub Issue Generator
A Rust command-line tool that generates GitHub issues from Markdown files. This tool parses structured Markdown files with YAML front matter and uses the GitHub CLI to create issues.
## Prerequisites
- [GitHub CLI](https://cli.github.com/) installed and authenticated
- [Rust](https://www.rust-lang.org/tools/install) installed (cargo, rustc)
## Installation
### Option 1: Build from source
1. Clone this repository
```bash
git clone https://github.com/bee8333/2ticketss.git
cd 2ticketss/01-gh-issue-generator
```
2. Build the application:
```bash
cargo build --release
```
3. The executable will be in `target/release/gh-issue-generator`
4. Optional: Add to your PATH
```bash
# Linux/macOS
cp target/release/gh-issue-generator ~/.local/bin/
# or add the following to your .bashrc or .zshrc
export PATH="$PATH:/path/to/2ticketss/01-gh-issue-generator/target/release"
```
### Option 2: Install with Cargo
If you have Rust installed, you can install directly with:
```bash
cargo install --path .
```
### GitHub CLI Authentication
Before using this tool, you must authenticate the GitHub CLI:
```bash
# Login to GitHub
gh auth login
# Verify you're authenticated
gh auth status
```
Make sure you grant the appropriate scopes (e.g. `repo` for private repositories) during authentication.
## Usage
```bash
gh-issue-generator [OPTIONS] <input_paths>...
```
Arguments:
- `<input_paths>...`: One or more paths to Markdown files or directories containing Markdown files
Options:
- `-r, --repo <owner/repo>`: GitHub repository in format owner/repo (e.g., "octocat/Hello-World")
- `-d, --dry-run`: Run in dry-run mode (don't actually create issues)
- `-v, --verbose`: Increase verbosity level (can be used multiple times: -v, -vv)
- `-c, --config <config>`: Path to config file
### Logging Options
You can control log verbosity in two ways:
1. Using the `--verbose` flag:
- `-v`: Debug level logging
- `-vv`: Trace level logging
2. Using the `RUST_LOG` environment variable:
```bash
# Set global log level
RUST_LOG=debug gh-issue-generator --repo owner/repo issues/
# Set per-module log level
RUST_LOG=gh_issue_generator=trace,walkdir=warn gh-issue-generator --repo owner/repo issues/
```
Examples:
```bash
# Process a single Markdown file
gh-issue-generator --repo myuser/myrepo issues/feature-request.md
# Process all Markdown files in a directory
gh-issue-generator --repo myuser/myrepo issues/
# Process multiple files
gh-issue-generator --repo myuser/myrepo issues/bug-1.md issues/feature-1.md
# Dry run (don't create actual issues)
gh-issue-generator --repo myuser/myrepo --dry-run issues/
# Use a specific config file
gh-issue-generator --config my-config.toml issues/
# Use increased verbosity for troubleshooting
gh-issue-generator --repo myuser/myrepo --verbose issues/
```
## Configuration
The tool supports configuration files to store default settings. Configuration is searched in these locations (in order):
1. Path specified with `--config` option
2. `.gh-issue-generator.toml` in current directory
3. `~/.gh-issue-generator.toml` in home directory
4. `~/.config/gh-issue-generator/config.toml` in XDG config directory
A default configuration file will be created at `~/.config/gh-issue-generator/config.toml` if none exists.
### Configuration Options
```toml
# Default GitHub repository (optional)
# If specified, you can omit --repo command line option
default_repo = "myuser/myrepo"
# Report output directory (default: "report")
report_dir = "report"
```
### Creating or Editing Configuration
You can manually create or edit the configuration file, or run the tool once to create a default configuration automatically.
For example, to set a default repository:
```bash
mkdir -p ~/.config/gh-issue-generator
cat > ~/.config/gh-issue-generator/config.toml << EOF
default_repo = "myuser/myrepo"
report_dir = "github-issues/reports"
EOF
```
## Complete Workflow Examples
### Example 1: Creating Feature Requests
1. Create a feature request Markdown file:
```bash
mkdir -p issues/features
cat > issues/features/search-feature.md << 'EOF'
---
title: Implement advanced search functionality
status: ready
labels:
- enhancement
- search
assignees:
- developer1
milestone: v2.0
---
## Description
We need to add advanced search capabilities to improve user experience.
## Requirements
- [ ] Support fuzzy search matching
- [ ] Allow filtering by multiple criteria
- [ ] Add search history feature
## Technical Considerations
- Consider using Elasticsearch
- Should be optimized for performance
EOF
```
2. Create the GitHub issue:
```bash
gh-issue-generator --repo myuser/myrepo issues/features/search-feature.md
```
### Example 2: Converting from Jira and Batch Creating
1. Convert Jira issues to GitHub format:
```bash
cd ../00-jira-to-gh-issues
./target/release/jira-to-gh-issues --input sprint-tickets.csv --output ../issues
cd ..
```
2. Create all issues at once:
```bash
cd 01-gh-issue-generator
./target/release/gh-issue-generator --repo myuser/myrepo ../issues/20240405123045/batch.md
```
## Markdown Format
Each Markdown file should have YAML front matter at the beginning of the file, followed by the issue description.
The front matter must be enclosed between `---` lines and contain at least a `title` field.
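For illustration, separating the front matter from the body comes down to splitting on the `---` fences. This is a minimal sketch, assuming the fences sit on their own lines at the top of the file; the tool's actual parser may differ (for example, it may use a regex and `serde_yaml`):

```rust
/// Split a document into (front matter, body), assuming a YAML block
/// fenced by `---` lines at the very top of the file.
/// Returns None when the document has no front matter.
fn split_front_matter(doc: &str) -> Option<(&str, &str)> {
    let rest = doc.strip_prefix("---\n")?;
    let end = rest.find("\n---\n")?;
    Some((&rest[..end], &rest[end + "\n---\n".len()..]))
}

fn main() {
    let doc = "---\ntitle: Add dark mode\nlabels:\n- ui\n---\n## Description\nBody here.\n";
    let (front, body) = split_front_matter(doc).expect("front matter present");
    println!("front matter:\n{front}");
    println!("body:\n{body}");
}
```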
### Front Matter Fields
| Field | Required | Description |
|-------|----------|-------------|
| `title` | Yes | Issue title |
| `status` | No | Issue status: "draft" or "ready" (default: "ready") |
| `labels` | No | Array of labels to apply to the issue |
| `assignees` | No | Array of GitHub usernames to assign the issue to |
| `milestone` | No | Milestone to add the issue to |
| `project` | No | GitHub project to add the issue to |
| `parent` | No | Issue number or URL of a parent issue |
### Issue Body
The content after the front matter becomes the issue body. It can contain any Markdown formatting, including:
- Task lists
- Code blocks
- Images
- Links
- Tables
- etc.
### Example
```markdown
---
title: Add support for OAuth2 authentication
status: ready
labels:
- enhancement
- security
assignees:
- octocat
milestone: v1.0
project: Project Board
parent: 42
---
## Description
We need to add support for OAuth2 authentication to improve security.
## Tasks
- [ ] Research OAuth2 providers
- [ ] Implement OAuth2 flow
- [ ] Add tests
- [ ] Update documentation
## Additional Context
This is needed for compliance with our security requirements.
```
### Draft Issues
Issues marked with `status: draft` will not be created in GitHub. They will be reported as skipped in the summary.
## Batch Files
The tool also supports processing multiple issues from a single file. This is useful for creating related issues in a single operation.
### Batch File Format
Batch files should:
1. Start with any introductory content (optional)
2. Use the delimiter `---ISSUE---` to separate individual issues
3. Each issue section must include its own YAML front matter and body
### Example Batch File
```markdown
# Sprint Planning Issues
This file contains all issues for the upcoming sprint.
---ISSUE---
---
title: Implement login feature
status: ready
labels:
- feature
---
## Description
Add user login functionality...
---ISSUE---
---
title: Update homepage design
status: ready
labels:
- ui
---
## Description
Refresh the homepage design...
---ISSUE---
---
title: Database migration planning
status: draft
---
## Description
Plan the database migration...
```
### Processing Batch Files
Process a batch file the same way as a regular file:
```bash
gh-issue-generator --repo myuser/myrepo sprint-planning.md
```
The tool will create each non-draft issue as a separate GitHub issue.
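The delimiter handling above can be sketched as a simple split: everything before the first `---ISSUE---` marker is introductory text and is dropped, and each remaining chunk is parsed as a standalone issue. An illustrative sketch:

```rust
/// Split a batch file into per-issue chunks using the `---ISSUE---`
/// delimiter. Content before the first delimiter is treated as
/// introductory text and discarded.
fn split_batch(content: &str) -> Vec<&str> {
    content
        .split("---ISSUE---")
        .skip(1)
        .map(str::trim)
        .filter(|chunk| !chunk.is_empty())
        .collect()
}

fn main() {
    let batch = "# Intro\n---ISSUE---\n---\ntitle: A\n---\nBody A\n---ISSUE---\n---\ntitle: B\n---\nBody B\n";
    println!("found {} issues", split_batch(batch).len());
}
```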
## Report Generation
After processing all input files, the tool creates a timestamped report directory in `report/YYYY-MM-DD-HH-MM/`. The report contains:
1. A summary.md file with details of all processed issues
2. Copies of all processed Markdown files
- Files for successfully created issues will have a comment with the issue URL
- Files for failed issues will have a comment with the error message
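As a hypothetical illustration (actual file names depend on your inputs), a report directory might look like:

```
report/2024-04-05-12-30/
├── summary.md          # status and URL (or error) for each processed issue
└── search-feature.md   # copy of the input file, annotated with the issue URL
```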
## Parent-Child Relationship
If an issue specifies a `parent` field, the tool will add a comment to the created issue linking it to the parent issue. The parent can be specified as:
- A GitHub issue number (e.g., `42`)
- A full GitHub issue URL (e.g., `https://github.com/owner/repo/issues/42`)
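As a sketch of how such a parent reference could be normalized to a bare issue number (illustrative only; the tool may handle the value differently internally):

```rust
/// Normalize a `parent` front matter value to an issue number.
/// Accepts either a bare number like "42" or a full issue URL like
/// "https://github.com/owner/repo/issues/42".
fn parent_issue_number(parent: &str) -> Option<u64> {
    if let Ok(n) = parent.parse::<u64>() {
        return Some(n);
    }
    // Fall back to parsing the last URL path segment.
    parent.trim_end_matches('/').rsplit('/').next()?.parse().ok()
}

fn main() {
    for p in ["42", "https://github.com/owner/repo/issues/42", "not-a-parent"] {
        println!("{p:?} -> {:?}", parent_issue_number(p));
    }
}
```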
## Troubleshooting
### GitHub CLI Authentication Issues
If you encounter authentication issues:
```bash
# Check your GitHub CLI authentication status
gh auth status
# Re-authenticate if needed
gh auth login
```
### Permission Issues
Ensure your GitHub user has permission to create issues in the target repository.
### Rate Limiting
If you're creating many issues, you might hit GitHub's API rate limits. The tool doesn't currently implement rate limit handling, so you might need to wait before running again.
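If you need to work around this today, one option is to pace the work yourself, for example by splitting inputs into smaller batches, or by wrapping per-issue work in a pacing helper like the hypothetical sketch below (not part of the tool):

```rust
use std::thread;
use std::time::Duration;

/// Call `f` for each item, sleeping `delay` between calls so that a
/// burst of issue-creation requests stays under API rate limits.
fn pace<T>(items: &[T], delay: Duration, mut f: impl FnMut(&T)) {
    for (i, item) in items.iter().enumerate() {
        if i > 0 {
            thread::sleep(delay);
        }
        f(item);
    }
}

fn main() {
    let files = ["bug-1.md", "feature-1.md", "feature-2.md"];
    pace(&files, Duration::from_millis(10), |f| println!("creating issue from {f}"));
}
```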
### Common Errors
| Error | Possible Solution |
|-------|------------------|
| `No such file or directory` | Check the file paths you're providing |
| `Failed to parse YAML front matter` | Ensure your YAML is properly formatted |
| `failed to fetch resource: ...` | Check your network connection and GitHub authentication |
| `Repository not found` | Verify the repository exists and you have access to it |
## Best Practices
- Use `--dry-run` to test your issue files before creating actual issues
- Organize your issue files in a logical directory structure
- Use consistent naming conventions for your issue files
- Include detailed information in your issue descriptions to minimize follow-up questions
## License
MIT
## AI-Assisted Issue Generation
You can use AI assistants such as Claude Projects or Cursor to help generate issue Markdown files. Here are template prompts you can use:
### Prompt for Generating a Single Issue
```
Create a GitHub issue markdown file with YAML front matter for the following project feature:
Feature description: [your feature description]
Key requirements:
- [requirement 1]
- [requirement 2]
The issue should include:
- A clear title
- Appropriate labels (choose from: bug, enhancement, documentation, feature, refactor)
- Detailed description
- Acceptance criteria
- Any technical considerations
```
### Prompt for Generating a Batch of Related Issues
```
Create a batch file containing GitHub issues for a sprint focused on [feature area].
Include 3-5 related issues that would be needed to implement this feature area.
Each issue should have appropriate front matter with title, labels, etc.
Make sure the issues are properly separated with the ---ISSUE--- delimiter.
The issues should cover different aspects like:
- Initial implementation
- UI/UX improvements
- Testing
- Documentation
```

## Example: draft issue (payment gateway integration)
---
title: Implement payment gateway integration
status: draft
labels:
- feature
- payments
- backend
assignees:
- developer3
milestone: v1.1
project: Development Roadmap
---
# Payment Gateway Integration
## Overview
We need to integrate with a payment gateway to handle subscription and one-time payments.
## Requirements
- Support for credit card payments
- Subscription management
- Invoicing
- Webhooks for payment events
- Handling failed payments and retries
## Options to Evaluate
- [ ] Stripe
- [ ] PayPal
- [ ] Braintree
- [ ] Square
## Notes
This issue is currently in draft because we need to:
1. Finalize which payment provider to use
2. Determine pricing tiers
3. Get legal approval for terms of service
## Tasks (Preliminary)
- [ ] Research payment gateways and their fees
- [ ] Create comparison matrix of features
- [ ] Discuss with legal team about compliance requirements
- [ ] Draft initial integration architecture
- [ ] Create test accounts with potential providers
## Questions to Answer
- What are our transaction volume estimates?
- Do we need international payment support?
- What currencies do we need to support?
- Are there specific compliance requirements (PCI-DSS)?

## Example: feature issue (user authentication service)
---
title: Implement user authentication service
status: ready
labels:
- feature
- security
- backend
assignees:
- developer1
- developer2
milestone: v1.0
project: Development Roadmap
parent: 42
---
# User Authentication Service Implementation
## Overview
We need to implement a comprehensive user authentication service that handles registration, login, password reset, and account management.
## Requirements
- Secure password handling with bcrypt
- JWT token generation and validation
- Email verification flow
- Two-factor authentication support
- Rate limiting for login attempts
- Session management
## Tasks
- [ ] Design authentication database schema
- [ ] Implement user registration endpoint
- [ ] Add email verification flow
- [ ] Create login endpoint with JWT token generation
- [ ] Implement password reset functionality
- [ ] Add two-factor authentication
- [ ] Set up rate limiting for login attempts
- [ ] Implement session management
- [ ] Write unit and integration tests
- [ ] Create API documentation
## Technical Notes
```
POST /api/auth/register
{
"email": "user@example.com",
"password": "securePassword",
"name": "User Name"
}
```
Authentication should follow OAuth 2.0 standards where applicable.
## Related Resources
- [JWT Best Practices](https://auth0.com/blog/a-look-at-the-latest-draft-for-jwt-bcp/)
- [OWASP Authentication Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Authentication_Cheat_Sheet.html)
## Acceptance Criteria
1. All authentication endpoints pass security review
2. Password storage follows current best practices
3. Login and registration flows have appropriate rate limiting
4. All user flows are documented in the API docs

## Example: batch file
# Batch Issues Example
This file demonstrates how to use our tool with a single batch file containing multiple issues.
To use this approach, save this as a markdown file and run:
```
gh-issue-generator --repo owner/repo batch.md
```
Note: Each issue must be separated by the delimiter `---ISSUE---`
---ISSUE---
---
title: Implement user registration flow
status: ready
labels:
- feature
- frontend
- backend
assignees:
- frontend-dev
- backend-dev
milestone: v1.0
project: User Management
---
## Description
Create a complete user registration flow including email verification.
## Requirements
- Registration form with validation
- Email verification process
- Welcome email
- Database schema updates
## Tasks
- [ ] Design registration form
- [ ] Implement frontend validation
- [ ] Create backend API endpoints
- [ ] Set up email verification system
- [ ] Update user database schema
- [ ] Add welcome email template
---ISSUE---
---
title: Add product search functionality
status: ready
labels:
- feature
- search
milestone: v1.0
project: Product Catalog
---
## Description
Implement search functionality for the product catalog.
## Requirements
- Search by product name, description, and tags
- Autocomplete suggestions
- Filter by category and price range
- Sort results by relevance, price, or rating
## Technical Approach
Use Elasticsearch for the search index and implement a React component for the frontend.
---ISSUE---
---
title: Optimize image loading performance
status: draft
labels:
- performance
- frontend
---
## Description
Optimize image loading to improve page performance and Core Web Vitals metrics.
## Ideas to Consider
- Implement lazy loading for images
- Use responsive images with srcset
- Add image compression pipeline
- Consider using a CDN for image delivery
This is currently in draft because we need to gather performance metrics first.
---ISSUE---
---
title: Update third-party dependencies
status: ready
labels:
- maintenance
- security
assignees:
- devops-team
---
## Description
Update all third-party dependencies to their latest versions to address security vulnerabilities.
## Dependencies to Update
- React and related packages
- Backend frameworks
- Database drivers
- Testing libraries
## Steps
1. Review current dependencies
2. Check for security advisories
3. Create upgrade plan
4. Test upgrades in staging
5. Deploy to production

## Example: bug report (login error with special characters)
---
title: Fix login error with special characters in password
status: ready
labels:
- bug
- security
- critical
assignees:
- backend-dev
milestone: v1.0.1
project: Bug Fixes
parent: 123
---
# Login Error with Special Characters in Password
## Bug Description
Users are unable to log in when their password contains certain special characters (specifically `#`, `&`, and `%`). The login form submits successfully but returns a 400 error.
## Steps to Reproduce
1. Create a user account with a password containing one of the special characters: `#`, `&`, or `%`
2. Log out of the account
3. Attempt to log in with the correct credentials
4. Observe the error message and failed login
## Expected Behavior
Login should succeed with the correct credentials regardless of special characters in the password.
## Actual Behavior
When submitting the login form with credentials containing special characters, the request fails with a 400 Bad Request error. The following error appears in the console:
```
POST https://api.example.com/auth/login 400 (Bad Request)
Error: {"error":"Invalid request parameters"}
```
## Environment
- **Browser**: Chrome 98.0.4758.102, Firefox 97.0.1
- **OS**: Windows 10, macOS Monterey 12.2.1
- **Backend Version**: v1.2.3
## Technical Analysis
Initial investigation suggests that the password is not being properly URL-encoded before being sent to the backend, causing the server to reject the request.
## Possible Fix
Add proper URL encoding to the login form submission or update the backend to handle special characters in the request payload correctly.
## Severity
Critical - Affects user authentication and prevents access to the application.

## Example: enhancement (dark mode support)
---
title: Add dark mode support
status: ready
labels:
- enhancement
- ui
- accessibility
assignees:
- ui-designer
- frontend-dev
milestone: v2.0
project: UI Improvements
---
# Dark Mode Support
## Description
We need to add a dark mode option to our application to improve accessibility and user experience in low-light environments.
## Requirements
- Toggle switch in user settings
- System preference detection (respect user's OS preference)
- Persistent setting (remember user's choice)
- Properly themed components for all UI elements
- Appropriate contrast ratios for accessibility compliance
## Tasks
- [ ] Create dark color palette
- [ ] Add theme toggle component in settings
- [ ] Set up theme context/provider
- [ ] Implement system preference detection
- [ ] Refactor components to use theme variables
- [ ] Add theme persistence in user settings
- [ ] Test across all major browsers and devices
- [ ] Verify accessibility compliance
## Technical Approach
We should use CSS variables and a theme provider context to implement the theming system. This will allow us to switch themes without requiring a page reload.
## Resources
- [WCAG Contrast Guidelines](https://www.w3.org/WAI/WCAG21/Understanding/contrast-minimum.html)
- [Material Design Dark Theme](https://material.io/design/color/dark-theme.html)
## Implementation Notes
The main challenges will be ensuring all third-party components also respect the theme, and making sure we maintain proper contrast ratios in both themes.

(One file's diff is omitted here because it is too large.)

## 01-gh-issue-generator/src/main.rs
use std::path::{Path, PathBuf};
use std::fs::{self, File, create_dir_all};
use std::io::{Write, Read};
use std::process::Command;
use anyhow::{Result, Context};
use chrono::Local;
use clap::Parser;
use dirs::home_dir;
use log::{info, warn, error, debug, trace, LevelFilter};
use regex::Regex;
use serde::{Deserialize, Serialize};
use walkdir::WalkDir;
/// GitHub Issue Generator from Markdown files
///
/// A tool that generates GitHub issues from structured Markdown files.
/// It parses files with YAML front matter to extract metadata like title,
/// labels, assignees, and other issue properties, then uses the GitHub CLI
/// to create issues with the specified attributes.
#[derive(Parser, Debug)]
#[command(
author,
version,
about,
long_about = "A command-line utility that creates GitHub issues from structured Markdown files.
It parses Markdown files with YAML front matter and uses the GitHub CLI to create issues.
The tool supports:
- Individual issue files with metadata like title, labels, assignees
- Batch files containing multiple issues
- Parent-child issue relationships
- GitHub project integration
- Draft issues (skipped during creation)
- Detailed reporting
Requirements:
- GitHub CLI (gh) must be installed and authenticated
- Repository must exist and user must have issue creation permissions"
)]
struct Args {
/// GitHub repository in format owner/repo
///
/// The target GitHub repository where issues will be created.
/// Format must be 'owner/repo' (e.g., 'octocat/Hello-World').
/// This can also be set in the configuration file.
#[arg(short, long)]
repo: Option<String>,
/// Path to markdown file, directory, or multiple file paths
///
/// One or more paths to:
/// - Individual markdown files
/// - Directories containing markdown files
/// - Batch files with multiple issues
///
/// All files must have .md or .markdown extension.
#[arg(required = true)]
input_paths: Vec<PathBuf>,
/// Dry run (don't actually create issues)
///
/// When enabled, the tool will parse files and show what would be created,
/// but won't actually create any issues on GitHub. Useful for testing
/// file formatting and validating inputs before actual creation.
#[arg(short, long)]
dry_run: bool,
/// Verbosity level (can be used multiple times)
///
/// Controls the amount of information displayed during execution:
/// -v: Debug level (detailed information for troubleshooting)
/// -vv: Trace level (very verbose, all operations logged)
///
/// You can also use the RUST_LOG environment variable instead.
#[arg(short, long, action = clap::ArgAction::Count)]
verbose: u8,
/// Path to config file
///
/// Path to a TOML configuration file. If not specified, the tool will
/// search for configuration in standard locations:
/// - .gh-issue-generator.toml in current directory
/// - ~/.gh-issue-generator.toml in home directory
/// - ~/.config/gh-issue-generator/config.toml
#[arg(short = 'c', long)]
config: Option<PathBuf>,
}
#[derive(Debug, Serialize, Deserialize)]
struct Config {
/// Default GitHub repository in format owner/repo
default_repo: Option<String>,
/// Default report output directory
#[serde(default = "default_report_dir")]
report_dir: String,
}
fn default_report_dir() -> String {
"report".to_string()
}
impl Default for Config {
fn default() -> Self {
Self {
default_repo: None,
report_dir: default_report_dir(),
}
}
}
impl Config {
/// Load configuration from file or create default if not exists
fn load(config_path: Option<&PathBuf>) -> Result<Self> {
// If path is provided, try to load from there
if let Some(path) = config_path {
debug!("Loading config from specified path: {}", path.display());
return Self::load_from_file(path);
}
// Try to load from default locations
let config_paths = [
// Current directory
PathBuf::from(".gh-issue-generator.toml"),
// Home directory
home_dir().map(|h| h.join(".gh-issue-generator.toml")).unwrap_or_default(),
// XDG config directory
home_dir().map(|h| h.join(".config/gh-issue-generator/config.toml")).unwrap_or_default(),
];
for path in &config_paths {
if path.exists() {
debug!("Found config file at: {}", path.display());
return Self::load_from_file(path);
}
}
// No config file found, create default in home directory
debug!("No config file found, using defaults");
Ok(Self::default())
}
/// Load configuration from a specific file
fn load_from_file(path: &Path) -> Result<Self> {
let mut file = File::open(path)
.context(format!("Failed to open config file: {}", path.display()))?;
let mut content = String::new();
file.read_to_string(&mut content)
.context("Failed to read config file")?;
toml::from_str(&content)
.context("Failed to parse config file as TOML")
}
/// Save configuration to file
fn save(&self, path: &Path) -> Result<()> {
debug!("Saving config to: {}", path.display());
// Create parent directories if they don't exist
if let Some(parent) = path.parent() {
create_dir_all(parent)?;
}
let content = toml::to_string_pretty(self)
.context("Failed to serialize config to TOML")?;
let mut file = File::create(path)
.context(format!("Failed to create config file: {}", path.display()))?;
file.write_all(content.as_bytes())
.context("Failed to write config file")?;
Ok(())
}
/// Create a default config file if it doesn't exist
fn create_default_if_not_exists() -> Result<()> {
let default_path = home_dir()
.map(|h| h.join(".config/gh-issue-generator/config.toml"))
.ok_or_else(|| anyhow::anyhow!("Failed to determine home directory"))?;
if !default_path.exists() {
debug!("Creating default config file at: {}", default_path.display());
let default_config = Self::default();
default_config.save(&default_path)?;
info!("Created default config file at: {}", default_path.display());
println!("Created default config file at: {}", default_path.display());
}
Ok(())
}
}
/// Issue front matter metadata
#[derive(Debug, Serialize, Deserialize, Clone)]
struct IssueMeta {
/// Issue title
title: String,
/// Status (draft, ready)
#[serde(default = "default_status")]
status: String,
/// GitHub project (number or URL)
#[serde(default)]
project: Option<String>,
/// Issue labels
#[serde(default)]
labels: Vec<String>,
/// Issue assignees
#[serde(default)]
assignees: Vec<String>,
/// Parent issue number or URL
#[serde(default)]
parent: Option<String>,
/// Issue milestone
#[serde(default)]
milestone: Option<String>,
}
fn default_status() -> String {
"ready".to_string()
}
/// Result of issue creation
#[derive(Debug)]
struct IssueResult {
filepath: PathBuf,
filename: String,
meta: IssueMeta,
success: bool,
issue_url: Option<String>,
error: Option<String>,
}
/// Validate repository format (owner/repo)
fn validate_repository_format(repo: &str) -> Result<()> {
debug!("Validating repository format: {}", repo);
// Check if the repository follows the owner/repo format
let repo_regex = Regex::new(r"^[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+$")?;
if !repo_regex.is_match(repo) {
let error_msg = format!("Invalid repository format: '{}'. Expected format: 'owner/repo'", repo);
error!("{}", error_msg);
return Err(anyhow::anyhow!(error_msg));
}
debug!("Repository format is valid");
Ok(())
}
fn main() -> Result<()> {
let args = Args::parse();
// Initialize logger with appropriate level
let log_level = match args.verbose {
0 => LevelFilter::Info,
1 => LevelFilter::Debug,
_ => LevelFilter::Trace,
};
// Initialize the logger, respecting the RUST_LOG env var if set
let mut builder = env_logger::Builder::new();
// If RUST_LOG is set, use that config; otherwise use our default based on -v flags
match std::env::var("RUST_LOG") {
Ok(_) => {
// RUST_LOG is set, initialize with default env_logger behavior
builder.init();
},
Err(_) => {
// RUST_LOG not set, use our verbosity flag
builder
.filter_level(log_level)
.format_timestamp(Some(env_logger::fmt::TimestampPrecision::Seconds))
.init();
// Log a hint about RUST_LOG for users who use -v
if args.verbose > 0 {
debug!("Tip: You can also control log level with the RUST_LOG environment variable");
}
}
};
info!("Starting GitHub Issue Generator");
// Load configuration
let config = Config::load(args.config.as_ref())?;
debug!("Loaded configuration: {:?}", config);
// Create default config if it doesn't exist (only in normal operation)
if args.verbose == 0 {
if let Err(e) = Config::create_default_if_not_exists() {
warn!("Failed to create default config file: {}", e);
}
}
// Determine the repository to use (command line takes precedence over config)
let repo = match (args.repo, &config.default_repo) {
(Some(r), _) => r,
(None, Some(r)) => {
info!("Using repository from config: {}", r);
r.clone()
},
(None, None) => {
let error_msg = "No repository specified. Use --repo option or set default_repo in config file.";
error!("{}", error_msg);
return Err(anyhow::anyhow!(error_msg));
}
};
debug!("Using repository: {}", repo);
debug!("Dry run: {}", args.dry_run);
debug!("Input paths: {:?}", args.input_paths);
// Validate repository format
validate_repository_format(&repo)?;
// Check if gh CLI is installed
check_gh_cli()?;
// Get list of all files to process
let files = collect_markdown_files(&args.input_paths)?;
info!("Found {} markdown files to process", files.len());
// Process each file
let mut results = Vec::new();
for file in files {
match process_file(&file, &repo, args.dry_run) {
Ok(mut file_results) => results.append(&mut file_results),
Err(e) => {
error!("Error processing file {}: {}", file.display(), e);
eprintln!("Error processing file {}: {}", file.display(), e);
}
}
}
// Create report directory
let timestamp = Local::now().format("%Y-%m-%d-%H-%M").to_string();
let report_dir = PathBuf::from(format!("{}/{}", config.report_dir, timestamp));
create_dir_all(&report_dir)?;
debug!("Created report directory: {}", report_dir.display());
// Write report
write_report(&report_dir, &results)?;
// Print summary
let success_count = results.iter().filter(|r| r.success).count();
info!("Summary: Successfully created {}/{} issues", success_count, results.len());
println!("\nSummary: Successfully created {}/{} issues", success_count, results.len());
println!("Report saved to {}", report_dir.display());
Ok(())
}
/// Check if GitHub CLI is installed and authenticated
fn check_gh_cli() -> Result<()> {
debug!("Checking if GitHub CLI is installed and authenticated");
let output = Command::new("gh")
.arg("--version")
.output()
.context("Failed to execute 'gh'. Is GitHub CLI installed?")?;
if !output.status.success() {
let error_msg = "GitHub CLI check failed. Please install the GitHub CLI ('gh') and authenticate.";
error!("{}", error_msg);
anyhow::bail!(error_msg);
}
debug!("GitHub CLI is available");
Ok(())
}
/// Collect all markdown files from the provided paths
fn collect_markdown_files(paths: &[PathBuf]) -> Result<Vec<PathBuf>> {
debug!("Collecting markdown files from input paths");
let mut files = Vec::new();
for path in paths {
if path.is_file() && is_markdown_file(path) {
debug!("Adding file: {}", path.display());
files.push(path.clone());
} else if path.is_dir() {
debug!("Scanning directory: {}", path.display());
for entry in WalkDir::new(path).into_iter().filter_map(|e| e.ok()) {
let entry_path = entry.path();
if entry_path.is_file() && is_markdown_file(entry_path) {
trace!("Found markdown file: {}", entry_path.display());
files.push(entry_path.to_path_buf());
}
}
}
}
Ok(files)
}
/// Check if file is a markdown file
fn is_markdown_file(path: &Path) -> bool {
path.extension()
.map(|ext| ext.eq_ignore_ascii_case("md") || ext.eq_ignore_ascii_case("markdown"))
.unwrap_or(false)
}
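The extension check above is deliberately case-insensitive, so `.MD` and `.Markdown` files are picked up too. A minimal standalone sketch (using a `&str` path for brevity; `is_md` is a hypothetical name, not part of the tool):

```rust
use std::path::Path;

// Standalone sketch of the case-insensitive markdown-extension check.
fn is_md(name: &str) -> bool {
    Path::new(name)
        .extension()
        .map(|ext| ext.eq_ignore_ascii_case("md") || ext.eq_ignore_ascii_case("markdown"))
        .unwrap_or(false)
}

fn main() {
    assert!(is_md("notes.md"));
    assert!(is_md("NOTES.MD"));
    assert!(is_md("guide.Markdown"));
    assert!(!is_md("notes.txt"));
    assert!(!is_md("README")); // no extension at all
    println!("ok");
}
```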
/// Process a single markdown file, which may contain multiple issues
fn process_file(filepath: &Path, repo: &str, dry_run: bool) -> Result<Vec<IssueResult>> {
info!("Processing file: {}", filepath.display());
// Read file content
let content = fs::read_to_string(filepath)?;
debug!("File read successfully, content length: {} bytes", content.len());
// Check if this is a batch file with multiple issues
if content.contains("---ISSUE---") {
process_batch_file(filepath, &content, repo, dry_run)
} else {
// Process as a single issue file
debug!("Processing as single issue file");
match process_single_issue(filepath, &content, repo, dry_run) {
Ok(result) => Ok(vec![result]),
Err(e) => {
error!("Failed to process single issue file: {}", e);
Err(e)
}
}
}
}
/// Process a batch file containing multiple issues
fn process_batch_file(filepath: &Path, content: &str, repo: &str, dry_run: bool) -> Result<Vec<IssueResult>> {
info!("Processing as batch file with multiple issues");
// Split content by issue delimiter
let filename = filepath.file_name().unwrap().to_string_lossy().to_string();
let issues: Vec<&str> = content.split("---ISSUE---").skip(1).collect();
info!("Found {} issues in batch file", issues.len());
let mut results = Vec::new();
// Process each issue
for (index, issue_content) in issues.iter().enumerate() {
let issue_filename = format!("{}-issue-{}", filename, index + 1);
info!("Processing issue {}/{}: {}", index + 1, issues.len(), issue_filename);
// Parse issue content
match parse_markdown_with_frontmatter(issue_content) {
Ok((meta, body)) => {
// Skip draft issues
if meta.status.to_lowercase() == "draft" {
info!("Skipping draft issue: {}", meta.title);
println!(" Skipping draft issue: {}", meta.title);
results.push(IssueResult {
filepath: filepath.to_path_buf(),
filename: issue_filename,
meta,
success: false,
issue_url: None,
error: Some("Issue marked as draft".to_string()),
});
continue;
}
// Create GitHub issue
let result = if dry_run {
info!("[DRY RUN] Would create issue: {}", meta.title);
println!(" [DRY RUN] Would create issue: {}", meta.title);
IssueResult {
filepath: filepath.to_path_buf(),
filename: issue_filename,
meta,
success: true,
issue_url: Some("https://github.com/dry-run/issue/1".to_string()),
error: None,
}
} else {
create_github_issue(filepath, repo, &meta, &body, issue_filename)?
};
results.push(result);
},
Err(e) => {
error!("Error parsing issue {}: {}", index + 1, e);
eprintln!(" Error parsing issue {}: {}", index + 1, e);
results.push(IssueResult {
filepath: filepath.to_path_buf(),
filename: issue_filename,
meta: IssueMeta {
title: format!("Failed to parse issue #{}", index + 1),
status: "error".to_string(),
project: None,
labels: Vec::new(),
assignees: Vec::new(),
parent: None,
milestone: None,
},
success: false,
issue_url: None,
error: Some(format!("Failed to parse issue: {}", e)),
});
}
}
}
Ok(results)
}
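A batch file is a plain markdown document in which each issue is introduced by a `---ISSUE---` line; everything before the first delimiter is treated as a header and skipped. A stdlib-only sketch of that split (`split_batch` is a hypothetical helper; the function above does the same thing before parsing each chunk's front matter):

```rust
// Stdlib-only sketch of splitting a batch file into per-issue chunks;
// the header before the first ---ISSUE--- delimiter is discarded.
fn split_batch(content: &str) -> Vec<&str> {
    content
        .split("---ISSUE---")
        .skip(1) // first piece is the file header/intro
        .map(str::trim)
        .collect()
}

fn main() {
    let batch = "Sprint 12 issues\n---ISSUE---\n---\ntitle: First\n---\nBody one\n---ISSUE---\n---\ntitle: Second\n---\nBody two";
    let issues = split_batch(batch);
    assert_eq!(issues.len(), 2);
    assert!(issues[0].contains("title: First"));
    assert!(issues[1].contains("title: Second"));
    println!("ok");
}
```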
/// Process a single issue from a markdown file
fn process_single_issue(filepath: &Path, content: &str, repo: &str, dry_run: bool) -> Result<IssueResult> {
// Parse front matter and content
let (meta, body) = parse_markdown_with_frontmatter(content)?;
let filename = filepath.file_name().unwrap().to_string_lossy().to_string();
// Skip draft issues
if meta.status.to_lowercase() == "draft" {
println!(" Skipping draft issue: {}", meta.title);
return Ok(IssueResult {
filepath: filepath.to_path_buf(),
filename,
meta,
success: false,
issue_url: None,
error: Some("Issue marked as draft".to_string()),
});
}
// Create GitHub issue
let result = if dry_run {
println!(" [DRY RUN] Would create issue: {}", meta.title);
IssueResult {
filepath: filepath.to_path_buf(),
filename,
meta,
success: true,
issue_url: Some("https://github.com/dry-run/issue/1".to_string()),
error: None,
}
} else {
create_github_issue(filepath, repo, &meta, &body, filename)?
};
Ok(result)
}
/// Parse markdown file with YAML front matter
fn parse_markdown_with_frontmatter(content: &str) -> Result<(IssueMeta, String)> {
// Front matter must appear at the start of the (trimmed) content between --- delimiters;
// (?s) lets `.` match newlines so both captures can span multiple lines
let front_matter_regex = Regex::new(r"(?s)^---\s*\n(.*?)\n---\s*\n(.*)")?;
if let Some(captures) = front_matter_regex.captures(content.trim()) {
let front_matter = captures.get(1).unwrap().as_str();
let body = captures.get(2).unwrap().as_str().trim().to_string();
// Parse YAML front matter
let meta: IssueMeta = serde_yaml::from_str(front_matter)
.context("Failed to parse YAML front matter")?;
Ok((meta, body))
} else {
anyhow::bail!("No valid front matter found in markdown file")
}
}
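Each issue starts with a YAML front-matter block (e.g. `title`, `status`, optional `labels`). This stdlib-only sketch mirrors the regex split above; `split_front_matter` is a hypothetical helper, and the real function goes on to feed the first capture to `serde_yaml`:

```rust
// Stdlib-only sketch of the front-matter/body split; the function above
// does the same with a regex, then parses the front matter as YAML.
fn split_front_matter(content: &str) -> Option<(&str, &str)> {
    let rest = content.trim().strip_prefix("---\n")?;
    let (front, body) = rest.split_once("\n---\n")?;
    Some((front, body.trim()))
}

fn main() {
    let md = "---\ntitle: Fix login bug\nstatus: open\n---\n\nSteps to reproduce...";
    let (front, body) = split_front_matter(md).unwrap();
    assert_eq!(front, "title: Fix login bug\nstatus: open");
    assert_eq!(body, "Steps to reproduce...");
    assert!(split_front_matter("no front matter here").is_none());
    println!("ok");
}
```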
/// Create GitHub issue using gh CLI
fn create_github_issue(filepath: &Path, repo: &str, meta: &IssueMeta, body: &str, filename: String) -> Result<IssueResult> {
// Build gh issue create command
let mut cmd = Command::new("gh");
cmd.arg("issue")
.arg("create")
.arg("--repo")
.arg(repo)
.arg("--title")
.arg(&meta.title)
.arg("--body")
.arg(body);
// Add labels if present
if !meta.labels.is_empty() {
cmd.arg("--label");
let labels = meta.labels.join(",");
cmd.arg(labels);
}
// Add assignees if present
if !meta.assignees.is_empty() {
cmd.arg("--assignee");
let assignees = meta.assignees.join(",");
cmd.arg(assignees);
}
// Add milestone if present
if let Some(milestone) = &meta.milestone {
cmd.arg("--milestone");
cmd.arg(milestone);
}
// Add project if present
if let Some(project) = &meta.project {
cmd.arg("--project");
cmd.arg(project);
}
// Execute command
println!(" Creating issue: {}", meta.title);
let output = cmd.output()?;
// Process result
if output.status.success() {
let issue_url = String::from_utf8(output.stdout)?.trim().to_string();
println!(" Created issue: {}", issue_url);
// Link to parent if specified
if let Some(parent) = &meta.parent {
link_to_parent_issue(repo, &issue_url, parent)?;
}
Ok(IssueResult {
filepath: filepath.to_path_buf(),
filename,
meta: meta.clone(),
success: true,
issue_url: Some(issue_url),
error: None,
})
} else {
let error = String::from_utf8(output.stderr)?.trim().to_string();
eprintln!(" Failed to create issue: {}", error);
Ok(IssueResult {
filepath: filepath.to_path_buf(),
filename,
meta: meta.clone(),
success: false,
issue_url: None,
error: Some(error),
})
}
}
/// Link a newly created issue to its parent issue using the gh CLI
fn link_to_parent_issue(repo: &str, issue_url: &str, parent: &str) -> Result<()> {
// Extract issue number from URL
let issue_number = extract_issue_number(issue_url)?;
let parent_issue = if parent.contains('/') {
parent.to_string()
} else {
format!("#{}", parent)
};
// Add comment linking to parent
let comment = format!("Child of {}", parent_issue);
let output = Command::new("gh")
.arg("issue")
.arg("comment")
.arg("--repo")
.arg(repo)
.arg(issue_number)
.arg("--body")
.arg(comment)
.output()?;
if !output.status.success() {
eprintln!(" Failed to link to parent issue: {}", String::from_utf8(output.stderr)?);
}
Ok(())
}
/// Extract issue number from GitHub issue URL
fn extract_issue_number(url: &str) -> Result<String> {
let re = Regex::new(r"/issues/(\d+)$")?;
if let Some(captures) = re.captures(url) {
Ok(captures.get(1).unwrap().as_str().to_string())
} else {
Ok(url.trim_start_matches('#').to_string())
}
}
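The issue number is the trailing path segment of a URL like `https://github.com/owner/repo/issues/42`. A stdlib-only equivalent of the regex above (`issue_number_from_url` is a hypothetical helper; unlike `extract_issue_number`, it returns `None` on non-issue URLs instead of falling back to the raw input):

```rust
// Stdlib-only sketch of pulling the issue number off a GitHub issue URL.
fn issue_number_from_url(url: &str) -> Option<&str> {
    let (_, tail) = url.rsplit_once("/issues/")?;
    (!tail.is_empty() && tail.chars().all(|c| c.is_ascii_digit())).then_some(tail)
}

fn main() {
    assert_eq!(
        issue_number_from_url("https://github.com/owner/repo/issues/42"),
        Some("42")
    );
    assert_eq!(issue_number_from_url("https://github.com/owner/repo/pull/42"), None);
    println!("ok");
}
```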
/// Write report of processed files
fn write_report(report_dir: &Path, results: &[IssueResult]) -> Result<()> {
// Create summary file
let mut summary = File::create(report_dir.join("summary.md"))?;
writeln!(summary, "# GitHub Issue Creation Report")?;
writeln!(summary, "\nGenerated on: {}\n", Local::now().format("%Y-%m-%d %H:%M:%S"))?;
writeln!(summary, "## Summary")?;
writeln!(summary, "- Total issues processed: {}", results.len())?;
writeln!(summary, "- Successfully created: {}", results.iter().filter(|r| r.success).count())?;
writeln!(summary, "- Failed: {}", results.iter().filter(|r| !r.success).count())?;
writeln!(summary, "\n## Details")?;
for result in results {
let status = if result.success { "✅" } else { "❌" };
writeln!(summary, "### {} {}", status, result.meta.title)?;
writeln!(summary, "- File: {} ({})", result.filepath.display(), result.filename)?;
if let Some(url) = &result.issue_url {
writeln!(summary, "- Issue: {}", url)?;
}
if let Some(error) = &result.error {
writeln!(summary, "- Error: {}", error)?;
}
writeln!(summary)?;
}
// Copy all processed files with results
write_individual_reports(report_dir, results)?;
// Copy batch files with their content
write_batch_reports(report_dir, results)?;
Ok(())
}
/// Write individual report files for each processed issue
fn write_individual_reports(report_dir: &Path, results: &[IssueResult]) -> Result<()> {
// Group results by filepath
let mut filepath_map: std::collections::HashMap<String, Vec<&IssueResult>> = std::collections::HashMap::new();
for result in results {
let path_str = result.filepath.to_string_lossy().to_string();
filepath_map.entry(path_str).or_default().push(result);
}
// Process each filepath
for (_path_str, file_results) in filepath_map {
// Results are grouped by filepath, so more than one entry means a batch file;
// write_batch_reports handles those separately
if file_results.len() > 1 {
continue;
}
// Single issue file
if let Some(result) = file_results.first() {
let dest_path = report_dir.join(&result.filename);
// Read original content
let mut content = fs::read_to_string(&result.filepath)?;
// Add result information
if !result.success {
content = format!("---\n# FAILED: {}\n---\n\n{}",
result.error.as_deref().unwrap_or("Unknown error"),
content);
} else if let Some(url) = &result.issue_url {
content = format!("---\n# CREATED: {}\n---\n\n{}", url, content);
}
// Write to report directory
fs::write(dest_path, content)?;
}
}
Ok(())
}
/// Write batch reports for files containing multiple issues
fn write_batch_reports(report_dir: &Path, results: &[IssueResult]) -> Result<()> {
// Group results by filepath
let mut filepath_map: std::collections::HashMap<String, Vec<&IssueResult>> = std::collections::HashMap::new();
for result in results {
let path_str = result.filepath.to_string_lossy().to_string();
filepath_map.entry(path_str).or_default().push(result);
}
// Process each filepath with multiple results (batch files)
for (path_str, file_results) in filepath_map {
let path = Path::new(&path_str);
// Only process batch files (files that produced more than one issue)
if file_results.len() > 1 {
let filename = path.file_name().unwrap().to_string_lossy().to_string();
let dest_path = report_dir.join(filename);
// Read original content
let content = fs::read_to_string(path)?;
// Split by issue delimiter
let mut parts: Vec<&str> = content.split("---ISSUE---").collect();
// First part is the file header/intro
let header = parts.remove(0);
// Create new content with results
let mut new_content = header.to_string();
for (part, result) in parts.iter().zip(file_results.iter()) {
let issue_marker = "\n---ISSUE---\n";
let status_comment = if !result.success {
format!("# FAILED: {}\n", result.error.as_deref().unwrap_or("Unknown error"))
} else if let Some(url) = &result.issue_url {
format!("# CREATED: {}\n", url)
} else {
String::new()
};
new_content.push_str(issue_marker);
new_content.push_str(&status_comment);
new_content.push_str(part);
}
// Write to report directory
fs::write(dest_path, new_content)?;
// Also create individual files for each issue in the batch
for result in &file_results {
// Parse front matter and content from the original file
let original_content = fs::read_to_string(&result.filepath)?;
let parts: Vec<&str> = original_content.split("---ISSUE---").skip(1).collect();
if let Some(part) = parts.get(file_results.iter().position(|r| r.filename == result.filename).unwrap_or(0)) {
let dest_path = report_dir.join(&result.filename);
// Add result information
let content = if !result.success {
format!("---\n# FAILED: {}\n---\n\n{}",
result.error.as_deref().unwrap_or("Unknown error"),
part)
} else if let Some(url) = &result.issue_url {
format!("---\n# CREATED: {}\n---\n\n{}", url, part)
} else {
part.to_string()
};
// Write to report directory
fs::write(dest_path, content)?;
}
}
}
}
Ok(())
}