33 Commits

Author SHA1 Message Date
tnypxl
61dde38773 docs: update README to reflect current functionality (#6) 2025-11-27 11:42:19 -06:00
tnypxl
9495ddd788 fix: resolve nil logger panic and CLI URL processing (#5)
- Initialize logger before Playwright to prevent nil pointer dereference
- Set AllowedPaths for CLI URLs so they get processed by scraper

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 11:04:08 -06:00
tnypxl
eb3b611864 Merge branch 'claude/fix-bugs-and-gaps-01DvJSzruQh49DU6XK5AykQU' (#4) 2025-11-27 10:50:03 -06:00
tnypxl
877a7876c0 fix: resolve 5 bugs identified in code review (#3) 2025-11-27 09:58:09 -06:00
tnypxl
7569aff6ec Add CLAUDE.md with project guidance for Claude Code (#2) 2025-11-27 09:29:10 -06:00
Arik Jones
9341a51d09 fix multi-file output 2024-12-06 17:02:31 -06:00
Arik Jones
9e9ac903e4 remove maxdepth from tests 2024-12-06 15:19:12 -06:00
Arik Jones
645626f763 remove maxdepth from tests 2024-12-06 15:17:33 -06:00
tnypxl
02e39baf38 flatten scrape config to 'sites:'
* flatten scrape config to 'sites:'. Update unit tests and readme.
* remove check for file_extensions configuration. 
* show progress indication after 5 seconds.
* add documentation to functions
* fix: remove MaxDepth and link extraction functionality
* fix: Remove MaxDepth references from cmd/web.go
2024-10-14 16:09:58 -05:00
333b9a366c fix: Resolve playwright and io/ioutil function deprecations. 2024-09-24 15:13:36 -05:00
Arik Jones (aider)
1869dae89a docs: update configuration section in README.md 2024-09-22 18:36:17 -05:00
Arik Jones (aider)
d3ff7cb862 docs: Update README.md CLI flag documentation 2024-09-22 18:33:24 -05:00
Arik Jones (aider)
ea410e4abb feat: Update README.md to reflect recent changes in functionality 2024-09-22 18:31:06 -05:00
Arik Jones (aider)
7d8e25b1ad docs: Add CHANGELOG.md with v0.0.3 release notes 2024-09-22 18:20:25 -05:00
Arik Jones
691832e282 fix: Update expectation 2024-09-22 18:18:03 -05:00
Arik Jones (aider)
31e0fa5ea4 fix: Remove redeclaration of cfg variable in cmd/root.go 2024-09-22 17:07:57 -05:00
Arik Jones (aider)
71f63ddaa8 fix: resolve undefined config variable in cmd/files.go 2024-09-22 17:07:32 -05:00
Arik Jones (aider)
574800c241 fix: Update runRollup function to accept config parameter 2024-09-22 17:06:18 -05:00
Arik Jones (aider)
d5a94f5468 fix: remove indentation while preserving HTML structure in ExtractContentWithCSS 2024-09-22 17:00:16 -05:00
Arik Jones (aider)
59994c085c fix: improve file ignore logic and preserve newlines in extracted content 2024-09-22 16:58:53 -05:00
Arik Jones (aider)
396f092d50 fix: improve file ignore pattern matching for nested directories 2024-09-22 16:58:22 -05:00
Arik Jones (aider)
274ef7ea79 test: enhance and expand test coverage for file operations 2024-09-22 16:56:52 -05:00
Arik Jones
a55e8df02a refactor: improve error handling and variable naming in TestRunRollup 2024-09-22 16:56:51 -05:00
Arik Jones (aider)
364b185269 fix: resolve test failures in TestRunRollup, TestExtractContentWithCSS, and TestExtractLinks 2024-09-21 16:04:20 -05:00
Arik Jones (aider)
952c2dda02 refactor: update browser initialization in scraper tests 2024-09-21 16:01:51 -05:00
Arik Jones (aider)
de84d68b4c test: initialize browser before running ExtractLinks test 2024-09-21 16:01:08 -05:00
Arik Jones (aider)
e5d4c514a7 fix: resolve build errors in test files 2024-09-21 15:59:39 -05:00
Arik Jones (aider)
6ff44f81bb fix: resolve nil pointer dereference in ExtractContentWithCSS test 2024-09-21 15:59:08 -05:00
Arik Jones (aider)
2fd411ce65 test: add debugging info and fix reflect import 2024-09-21 15:57:05 -05:00
Arik Jones
73116e8d82 Fix logging and other issues from preventing scraping 2024-09-21 15:54:33 -05:00
5482621d99 fix: Use preferred fmt.Fprintf function 2024-09-20 13:48:28 -05:00
3788a08b00 fix: Remove unused args in getDefaultFilename(), use preferred fmt.Fprintf function 2024-09-20 13:47:52 -05:00
8ba54001ce cleanup: Ran go mod tidy to clear out an unused dep. 2024-09-20 13:41:51 -05:00
16 changed files with 1516 additions and 343 deletions

CLAUDE.md (new file, 53 lines)

@@ -0,0 +1,53 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Build and Run Commands
```bash
# Build the binary
go build -o rollup .
# Run directly
go run main.go [command]
# Run tests
go test ./...
# Run a single test
go test -run TestName ./path/to/package
```
## Project Overview
Rollup is a Go CLI tool that aggregates text-based files and webpages into markdown files. It has three main commands:
- `files` - Rolls up local files into a single markdown file
- `web` - Scrapes webpages and converts to markdown using Playwright
- `generate` - Creates a default rollup.yml config file
## Architecture
**Entry Point**: `main.go` initializes Playwright browser and loads config before executing commands via Cobra.
**Command Layer** (`cmd/`):
- `root.go` - Cobra root command with global flags (--config, --verbose)
- `files.go` - File aggregation with glob pattern matching for ignore/codegen detection
- `web.go` - Web scraping orchestration, converts config site definitions to scraper configs
- `generate.go` - Scans directory for text file types and generates rollup.yml
**Internal Packages**:
- `internal/config` - YAML config loading and validation. Defines `Config`, `SiteConfig`, `PathOverride` structs
- `internal/scraper` - Playwright-based web scraping with rate limiting, HTML-to-markdown conversion via goquery and html-to-markdown library
**Key Dependencies**:
- `spf13/cobra` - CLI framework
- `playwright-go` - Browser automation for web scraping
- `PuerkitoBio/goquery` - HTML parsing and CSS selector extraction
- `JohannesKaufmann/html-to-markdown` - HTML to markdown conversion
## Configuration
The tool reads from `rollup.yml` by default. Key config fields:
- `file_extensions` - File types to include in rollup
- `ignore_paths` / `code_generated_paths` - Glob patterns for exclusion
- `sites` - Web scraping targets with CSS selectors, path filtering, rate limiting
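A minimal `rollup.yml` sketch based on the fields above (illustrative values, not from this repository):
```yaml
file_extensions:
  - go
  - md
ignore_paths:
  - vendor/**
code_generated_paths:
  - "**/*.pb.go"
sites:
  - base_url: https://example.com
    css_locator: .main-content
    allowed_paths:
      - /docs
output_type: single
```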

README.md (234 changed lines)

@@ -1,119 +1,225 @@
# Rollup
Rollup aggregates the contents of text-based files and webpages into a markdown file.
Rollup aggregates the contents of text-based files and webpages into markdown files.
## Features
- File type filtering
- Ignore patterns for excluding files
- Support for code-generated file detection
- Advanced web scraping functionality
- Verbose logging option for detailed output
- Exclusionary CSS selectors for web scraping
- Support for multiple URLs in web scraping
- Configurable output format for web scraping (single file or separate files)
- Configuration file support (YAML)
- Generation of default configuration file
- **File aggregation**: Combine multiple source files into a single markdown document
- **File type filtering**: Include only specific file extensions
- **Ignore patterns**: Exclude files/directories using glob patterns
- **Code-generated file detection**: Mark auto-generated files as read-only in output
- **Web scraping**: Scrape webpage content using Playwright browser automation
- **HTML to Markdown conversion**: Automatically converts scraped HTML to clean markdown
- **CSS selectors**: Extract specific content sections or exclude unwanted elements
- **Path-based overrides**: Configure different selectors for specific URL paths
- **Rate limiting**: Configurable requests per second and burst limits for web scraping
- **Output modes**: Single combined file or separate files per source
- **Verbose logging**: Detailed operation insights for debugging
- **YAML configuration**: Flexible configuration file support
## Installation
To install Rollup, make sure you have Go installed on your system, then run:
Ensure you have Go 1.21+ installed, then run:
```bash
go get github.com/tnypxl/rollup
go install github.com/tnypxl/rollup@latest
```
Or build from source:
```bash
git clone https://github.com/tnypxl/rollup.git
cd rollup
go build -o rollup .
```
## Usage
Basic usage:
```bash
rollup [command] [flags]
```
### Commands
- `rollup files`: Rollup files into a single Markdown file
- `rollup web`: Scrape main content from webpages and convert to Markdown
- `rollup generate`: Generate a rollup.yml config file
| Command | Description |
|---------|-------------|
| `files` | Aggregate local files into a single markdown file |
| `web` | Scrape webpages and convert to markdown |
| `generate` | Generate a default rollup.yml config file |
### Flags for `files` command
- `--path, -p`: Path to the project directory (default: current directory)
- `--types, -t`: Comma-separated list of file extensions to include (default: .go,.md,.txt)
- `--codegen, -g`: Comma-separated list of glob patterns for code-generated files
- `--ignore, -i`: Comma-separated list of glob patterns for files to ignore
| Flag | Short | Default | Description |
|------|-------|---------|-------------|
| `--path` | `-p` | `.` | Path to the project directory |
| `--types` | `-t` | `go,md,txt` | Comma-separated list of file extensions (without dots) |
| `--codegen` | `-g` | | Glob patterns for code-generated files |
| `--ignore` | `-i` | | Glob patterns for files to ignore |
### Flags for `web` command
- `--urls, -u`: URLs of the webpages to scrape (comma-separated)
- `--output, -o`: Output type: 'single' for one file, 'separate' for multiple files (default: single)
- `--depth, -d`: Depth of link traversal (default: 0, only scrape the given URLs)
- `--css`: CSS selector to extract specific content
- `--exclude`: CSS selectors to exclude from the extracted content (comma-separated)
| Flag | Short | Description |
|------|-------|-------------|
| `--urls` | `-u` | URLs of webpages to scrape (comma-separated) |
| `--output` | `-o` | Output type: `single` or `separate` |
| `--css` | | CSS selector to extract specific content |
| `--exclude` | | CSS selectors to exclude (comma-separated) |
### Global flags
- `--config, -f`: Path to the configuration file (default: rollup.yml in the current directory)
- `--verbose, -v`: Enable verbose logging
| Flag | Short | Description |
|------|-------|-------------|
| `--config` | `-f` | Path to config file (default: `rollup.yml`) |
| `--verbose` | `-v` | Enable verbose logging |
## Configuration
Rollup can be configured using a YAML file. By default, it looks for `rollup.yml` in the current directory. You can specify a different configuration file using the `--config` flag.
Rollup reads from `rollup.yml` by default. Use `--config` to specify a different file.
Example `rollup.yml`:
### Configuration Options
```yaml
file_types:
# File extensions to include (without leading dots)
file_extensions:
- go
- md
ignore:
- js
# Glob patterns for paths to ignore
ignore_paths:
- node_modules/**
- vendor/**
- .git/**
code_generated:
- **/generated/**
scrape:
urls:
- url: https://example.com
css_locator: .content
exclude_selectors:
- .ads
- .navigation
output_alias: example
output_type: single
# Glob patterns for code-generated files (marked as read-only in output)
code_generated_paths:
- "**/*.pb.go"
- "**/generated/**"
# Web scraping site configurations
sites:
- base_url: https://example.com
css_locator: .main-content
exclude_selectors:
- .ads
- .navigation
- footer
allowed_paths:
- /docs
- /blog
exclude_paths:
- /admin
file_name_prefix: example-docs
path_overrides:
- path: /special-page
css_locator: .special-content
exclude_selectors:
- .special-ads
# Output type for web scraping: 'single' or 'separate'
output_type: single
# Rate limiting for web requests
requests_per_second: 1.0
burst_limit: 3
```
### Configuration Reference
| Field | Type | Description |
|-------|------|-------------|
| `file_extensions` | list | File extensions to include in file rollup |
| `ignore_paths` | list | Glob patterns for files/directories to skip |
| `code_generated_paths` | list | Glob patterns for auto-generated files |
| `sites` | list | Web scraping target configurations |
| `output_type` | string | `single` (one file) or `separate` (multiple files) |
| `requests_per_second` | float | Rate limit for web requests (default: 1.0) |
| `burst_limit` | int | Maximum burst size for rate limiting (default: 3) |
#### Site Configuration
| Field | Type | Description |
|-------|------|-------------|
| `base_url` | string | Starting URL for scraping (required) |
| `css_locator` | string | CSS selector for content extraction |
| `exclude_selectors` | list | CSS selectors for content to exclude |
| `allowed_paths` | list | URL paths allowed for scraping |
| `exclude_paths` | list | URL paths to skip |
| `file_name_prefix` | string | Prefix for output file names |
| `path_overrides` | list | Path-specific selector overrides |
## Examples
1. Rollup files with default configuration:
```bash
rollup files
```
2. Web scraping with multiple URLs:
```bash
rollup web --urls=https://example.com,https://another-example.com
```
3. Generate a default configuration file:
```bash
rollup generate
```
4. Use a custom configuration file:
```bash
rollup files --config=my-config.yml
```
5. Web scraping with separate output files:
```bash
rollup web --urls=https://example.com,https://another-example.com --output=separate
```
### File Aggregation
```bash
# Rollup files using config file
rollup files

# Specify file types and ignore patterns
rollup files --types=go,js,ts --ignore="vendor/**,*_test.go"

# Rollup a specific directory
rollup files --path=/path/to/project
```
### Web Scraping
```bash
# Scrape URLs from command line
rollup web --urls=https://example.com/docs

# Scrape multiple URLs
rollup web --urls=https://example.com,https://another.com

# Extract specific content with CSS selector
rollup web --urls=https://example.com --css=".article-content"

# Exclude elements from scraped content
rollup web --urls=https://example.com --css=".content" --exclude=".ads,.sidebar"

# Output to separate files
rollup web --urls=https://example.com --output=separate
```
### Configuration Generation
```bash
# Generate rollup.yml based on files in current directory
rollup generate
```
### Using Custom Config
```bash
rollup files --config=my-config.yml
rollup web --config=my-config.yml
```
## Output
### File Rollup Output
The `files` command generates a markdown file named `<project-name>-<timestamp>.rollup.md` containing all matched files:
````markdown
# File: src/main.go
```go
package main
// ... file contents
```

# File: docs/README.md (Code-generated, Read-only)
```md
// ... file contents
```
````
### Web Rollup Output
The `web` command generates markdown files from scraped content, with filenames based on the page title or URL.
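For example (per `getFilenameFromContent` and its tests later in this diff), a page titled `Test Page` is written to `Test_Page.rollup.md`, while an untitled page at `http://example.com/page` falls back to `example_com_page.rollup.md`.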
## Contributing

cmd/files.go

@@ -8,8 +8,11 @@ import (
"time"
"github.com/spf13/cobra"
"github.com/tnypxl/rollup/internal/config"
)
var cfg *config.Config
var (
path string
fileTypes string
@@ -24,13 +27,13 @@ var filesCmd = &cobra.Command{
in a given project, current path or a custom path, to a single timestamped markdown file
whose name is <project-directory-name>-rollup-<timestamp>.md.`,
RunE: func(cmd *cobra.Command, args []string) error {
return runRollup()
return runRollup(cfg)
},
}
func init() {
filesCmd.Flags().StringVarP(&path, "path", "p", ".", "Path to the project directory")
filesCmd.Flags().StringVarP(&fileTypes, "types", "t", ".go,.md,.txt", "Comma-separated list of file extensions to include")
filesCmd.Flags().StringVarP(&fileTypes, "types", "t", "go,md,txt", "Comma-separated list of file extensions to include (without leading dot)")
filesCmd.Flags().StringVarP(&codeGenPatterns, "codegen", "g", "", "Comma-separated list of glob patterns for code-generated files")
filesCmd.Flags().StringVarP(&ignorePatterns, "ignore", "i", "", "Comma-separated list of glob patterns for files to ignore")
}
@@ -87,30 +90,38 @@ func isIgnored(filePath string, patterns []string) bool {
return true
}
} else {
matched, err := filepath.Match(pattern, filepath.Base(filePath))
if err == nil && matched {
// Check if the pattern matches the full path or any part of it
if matched, _ := filepath.Match(pattern, filePath); matched {
return true
}
pathParts := strings.Split(filePath, string(os.PathSeparator))
for i := range pathParts {
partialPath := filepath.Join(pathParts[:i+1]...)
if matched, _ := filepath.Match(pattern, partialPath); matched {
return true
}
}
}
}
return false
}
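// Worked example (annotation, not from the diff): with pattern "*.tmp",
// filepath.Match accepts a top-level "file.tmp" but rejects "subdir/file.tmp",
// since "*" never crosses a path separator; patterns containing "**" are
// handled by the earlier branch, which this hunk only partially shows.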
func runRollup() error {
func runRollup(cfg *config.Config) error {
// Use config if available, otherwise use command-line flags
var types, codeGenList, ignoreList []string
if cfg != nil && len(cfg.FileTypes) > 0 {
types = cfg.FileTypes
var types []string
var codeGenList, ignoreList []string
if cfg != nil && len(cfg.FileExtensions) > 0 {
types = cfg.FileExtensions
} else {
types = strings.Split(fileTypes, ",")
}
if cfg != nil && len(cfg.CodeGenerated) > 0 {
codeGenList = cfg.CodeGenerated
if cfg != nil && len(cfg.CodeGeneratedPaths) > 0 {
codeGenList = cfg.CodeGeneratedPaths
} else {
codeGenList = strings.Split(codeGenPatterns, ",")
}
if cfg != nil && cfg.Ignore != nil && len(cfg.Ignore) > 0 {
ignoreList = cfg.Ignore
if cfg != nil && len(cfg.IgnorePaths) > 0 {
ignoreList = cfg.IgnorePaths
} else {
ignoreList = strings.Split(ignorePatterns, ",")
}
@@ -135,6 +146,11 @@ func runRollup() error {
}
defer outputFile.Close()
startTime := time.Now()
showProgress := false
progressTicker := time.NewTicker(500 * time.Millisecond)
defer progressTicker.Stop()
// Walk through the directory
err = filepath.Walk(absPath, func(path string, info os.FileInfo, err error) error {
if err != nil {
@@ -150,16 +166,25 @@ func runRollup() error {
// Check if the file should be ignored
if isIgnored(relPath, ignoreList) {
if verbose {
fmt.Printf("Ignoring file: %s\n", relPath)
}
return nil
}
ext := filepath.Ext(path)
for _, t := range types {
if ext == "."+t {
// Verbose logging for processed file
if verbose {
size := humanReadableSize(info.Size())
fmt.Printf("Processing file: %s (%s)\n", relPath, size)
}
// Read file contents
content, err := os.ReadFile(path)
if err != nil {
fmt.Printf("Error reading file %s: %v", path, err)
fmt.Printf("Error reading file %s: %v\n", path, err)
return nil
}
@@ -175,12 +200,43 @@ func runRollup() error {
break
}
}
if !showProgress && time.Since(startTime) > 5*time.Second {
showProgress = true
fmt.Print("This is taking a while (hold tight) ")
}
select {
case <-progressTicker.C:
if showProgress {
fmt.Print(".")
}
default:
}
return nil
})
if err != nil {
return fmt.Errorf("error walking through directory: %v", err)
}
fmt.Printf("Rollup complete. Output file: %s", outputFileName)
if showProgress {
fmt.Println() // Print a newline after the progress dots
}
fmt.Printf("Rollup complete. Output file: %s\n", outputFileName)
return nil
}
func humanReadableSize(size int64) string {
const unit = 1024
if size < unit {
return fmt.Sprintf("%d B", size)
}
div, exp := int64(unit), 0
for n := size / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(size)/float64(div), "KMGTPE"[exp])
}
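// Worked example (annotation, not from the diff): humanReadableSize(512)
// returns "512 B", and humanReadableSize(1536) returns "1.5 KB"
// (1536/1024 = 1.5, with exp 0 selecting 'K' from "KMGTPE").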

cmd/files_test.go (new file, 172 lines)

@@ -0,0 +1,172 @@
package cmd
import (
"os"
"path/filepath"
"strings"
"testing"
"github.com/tnypxl/rollup/internal/config"
)
func TestMatchGlob(t *testing.T) {
tests := []struct {
pattern string
path string
expected bool
}{
{"*.go", "file.go", true},
{"*.go", "file.txt", false},
{"**/*.go", "dir/file.go", true},
{"**/*.go", "dir/subdir/file.go", true},
{"dir/*.go", "dir/file.go", true},
{"dir/*.go", "otherdir/file.go", false},
{"**/test_*.go", "internal/test_helper.go", true},
{"docs/**/*.md", "docs/api/endpoints.md", true},
{"docs/**/*.md", "src/docs/readme.md", false},
}
for _, test := range tests {
result := matchGlob(test.pattern, test.path)
if result != test.expected {
t.Errorf("matchGlob(%q, %q) = %v; want %v", test.pattern, test.path, result, test.expected)
}
}
}
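The `matchGlob` helper exercised above is not itself part of this diff. A minimal sketch consistent with these cases (an assumption about its behavior, not the repository's actual code) could recurse over slash-separated segments, letting `**` consume any number of them:
```go
// Hypothetical sketch; the repository's real matchGlob may differ.
package cmd

import (
	"path/filepath"
	"strings"
)

// matchGlobSketch reports whether path matches a doublestar-style pattern.
func matchGlobSketch(pattern, path string) bool {
	return matchSegs(strings.Split(pattern, "/"), strings.Split(path, "/"))
}

func matchSegs(pat, segs []string) bool {
	if len(pat) == 0 {
		return len(segs) == 0
	}
	if pat[0] == "**" {
		// "**" may consume zero or more whole path segments.
		for i := 0; i <= len(segs); i++ {
			if matchSegs(pat[1:], segs[i:]) {
				return true
			}
		}
		return false
	}
	if len(segs) == 0 {
		return false
	}
	// Plain segments match one path segment each, so "*" never crosses "/".
	if ok, _ := filepath.Match(pat[0], segs[0]); !ok {
		return false
	}
	return matchSegs(pat[1:], segs[1:])
}
```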
func TestIsCodeGenerated(t *testing.T) {
patterns := []string{"generated_*.go", "**/auto_*.go", "**/*_gen.go"}
tests := []struct {
path string
expected bool
}{
{"generated_file.go", true},
{"normal_file.go", false},
{"subdir/auto_file.go", true},
{"subdir/normal_file.go", false},
{"pkg/models_gen.go", true},
{"pkg/handler.go", false},
}
for _, test := range tests {
result := isCodeGenerated(test.path, patterns)
if result != test.expected {
t.Errorf("isCodeGenerated(%q, %v) = %v; want %v", test.path, patterns, result, test.expected)
}
}
}
func TestIsIgnored(t *testing.T) {
patterns := []string{"*.tmp", "**/*.log", ".git/**", "vendor/**"}
tests := []struct {
path string
expected bool
}{
{"file.tmp", true},
{"file.go", false},
{"subdir/file.log", true},
{"subdir/file.txt", false},
{".git/config", true},
{"src/.git/config", false},
{"vendor/package/file.go", true},
{"internal/vendor/file.go", false},
}
for _, test := range tests {
result := isIgnored(test.path, patterns)
if result != test.expected {
t.Errorf("isIgnored(%q, %v) = %v; want %v", test.path, patterns, result, test.expected)
}
}
}
func TestRunRollup(t *testing.T) {
// Create a temporary directory for testing
tempDir, err := os.MkdirTemp("", "rollup_test")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer os.RemoveAll(tempDir)
// Create some test files
files := map[string]string{
"file1.go": "package main\n\nfunc main() {}\n",
"file2.txt": "This is a text file.\n",
"subdir/file3.go": "package subdir\n\nfunc Func() {}\n",
"subdir/file4.json": "{\"key\": \"value\"}\n",
"generated_model.go": "// Code generated DO NOT EDIT.\n\npackage model\n",
"docs/api/readme.md": "# API Documentation\n",
".git/config": "[core]\n\trepositoryformatversion = 0\n",
"vendor/lib/helper.go": "package lib\n\nfunc Helper() {}\n",
}
for name, content := range files {
path := filepath.Join(tempDir, name)
if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
t.Fatalf("Failed to create directory: %v", err)
}
if err := os.WriteFile(path, []byte(content), 0o644); err != nil {
t.Fatalf("Failed to write file: %v", err)
}
}
// Set up test configuration
cfg = &config.Config{
FileExtensions: []string{"go", "txt", "md"},
IgnorePaths: []string{"*.json", ".git/**", "vendor/**"},
CodeGeneratedPaths: []string{"generated_*.go"},
}
// Change working directory to the temp directory
originalWd, _ := os.Getwd()
os.Chdir(tempDir)
defer os.Chdir(originalWd)
// Run the rollup
if err := runRollup(cfg); err != nil {
t.Fatalf("runRollup() failed: %v", err)
}
// Check if the output file was created
outputFiles, err := filepath.Glob("*.rollup.md")
if err != nil {
t.Fatalf("Error globbing for output file: %v", err)
}
if len(outputFiles) == 0 {
allFiles, _ := filepath.Glob("*")
t.Fatalf("No rollup.md file found. Files in directory: %v", allFiles)
}
outputFile := outputFiles[0]
// Read the content of the output file
content, err := os.ReadFile(outputFile)
if err != nil {
t.Fatalf("Failed to read output file: %v", err)
}
// Check if the content includes the expected files
expectedContent := []string{
"# File: file1.go",
"# File: file2.txt",
"# File: subdir/file3.go",
"# File: docs/api/readme.md",
"# File: generated_model.go (Code-generated, Read-only)",
}
for _, expected := range expectedContent {
if !strings.Contains(string(content), expected) {
t.Errorf("Output file does not contain expected content: %s", expected)
}
}
// Check if the ignored files are not included
ignoredContent := []string{
"file4.json",
".git/config",
"vendor/lib/helper.go",
}
for _, ignored := range ignoredContent {
if strings.Contains(string(content), ignored) {
t.Errorf("Output file contains ignored file: %s", ignored)
}
}
}

cmd/generate.go

@@ -38,23 +38,23 @@ func runGenerate(cmd *cobra.Command, args []string) error {
}
cfg := config.Config{
FileTypes: make([]string, 0, len(fileTypes)),
Ignore: []string{"node_modules/**", "vendor/**", ".git/**"},
FileExtensions: make([]string, 0, len(fileTypes)),
IgnorePaths: []string{"node_modules/**", "vendor/**", ".git/**"},
}
for ext := range fileTypes {
cfg.FileTypes = append(cfg.FileTypes, ext)
cfg.FileExtensions = append(cfg.FileExtensions, ext)
}
// Sort file types for consistency
sort.Strings(cfg.FileTypes)
sort.Strings(cfg.FileExtensions)
yamlData, err := yaml.Marshal(&cfg)
if err != nil {
return fmt.Errorf("error marshaling config: %v", err)
}
outputPath := config.DefaultConfigPath()
outputPath := "rollup.yml"
err = os.WriteFile(outputPath, yamlData, 0644)
if err != nil {
return fmt.Errorf("error writing config file: %v", err)

cmd/root.go

@@ -1,13 +1,14 @@
package cmd
import (
"log"
"github.com/spf13/cobra"
config "github.com/tnypxl/rollup/internal/config"
"github.com/tnypxl/rollup/internal/config"
)
var (
configFile string
cfg *config.Config
verbose bool
)
@@ -16,13 +17,31 @@ var rootCmd = &cobra.Command{
Short: "Rollup is a tool for combining and processing files",
Long: `Rollup is a versatile tool that can combine and process files in various ways.
Use subcommands to perform specific operations.`,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
// Skip config loading for generate and help commands
if cmd.Name() == "generate" || cmd.Name() == "help" {
return nil
}
// Determine config path
configPath := configFile
if configPath == "" {
configPath = "rollup.yml"
}
// Load configuration
var err error
cfg, err = config.Load(configPath)
if err != nil {
log.Printf("Warning: Failed to load configuration from %s: %v", configPath, err)
cfg = &config.Config{} // Use empty config if loading fails
}
return nil
},
}
func Execute(conf *config.Config) error {
cfg = conf
if cfg == nil {
cfg = &config.Config{} // Use an empty config if none is provided
}
func Execute() error {
return rootCmd.Execute()
}

cmd/web.go

@@ -2,6 +2,8 @@ package cmd
import (
"fmt"
"io"
"log"
"net/url"
"os"
"regexp"
@@ -9,13 +11,13 @@ import (
"time"
"github.com/spf13/cobra"
"github.com/tnypxl/rollup/internal/config"
"github.com/tnypxl/rollup/internal/scraper"
)
var (
urls []string
outputType string
depth int
includeSelector string
excludeSelectors []string
)
@@ -26,178 +28,163 @@ var webCmd = &cobra.Command{
Use: "web",
Short: "Scrape main content from webpages and convert to Markdown",
Long: `Scrape the main content from one or more webpages, ignoring navigational elements, ads, and other UI aspects. Convert the content to a well-structured Markdown file.`,
RunE: runWeb,
PreRunE: func(cmd *cobra.Command, args []string) error {
// Setup logger before initializing Playwright
scraper.SetupLogger(verbose)
// Initialize Playwright for web scraping
if err := scraper.InitPlaywright(); err != nil {
return fmt.Errorf("failed to initialize Playwright: %w", err)
}
return nil
},
RunE: runWeb,
PostRunE: func(cmd *cobra.Command, args []string) error {
// Clean up Playwright resources
scraper.ClosePlaywright()
return nil
},
}
func init() {
webCmd.Flags().StringSliceVarP(&urls, "urls", "u", []string{}, "URLs of the webpages to scrape (comma-separated)")
webCmd.Flags().StringVarP(&outputType, "output", "o", "single", "Output type: 'single' for one file, 'separate' for multiple files")
webCmd.Flags().IntVarP(&depth, "depth", "d", 0, "Depth of link traversal (default: 0, only scrape the given URLs)")
webCmd.Flags().StringVarP(&outputType, "output", "o", "", "Output type: 'single' for one file, 'separate' for multiple files")
webCmd.Flags().StringVar(&includeSelector, "css", "", "CSS selector to extract specific content")
webCmd.Flags().StringSliceVar(&excludeSelectors, "exclude", []string{}, "CSS selectors to exclude from the extracted content (comma-separated)")
}
func runWeb(cmd *cobra.Command, args []string) error {
scraper.SetupLogger(verbose)
logger := log.New(os.Stdout, "WEB: ", log.LstdFlags)
if !verbose {
logger.SetOutput(io.Discard)
}
logger.Printf("Starting web scraping process with verbose mode: %v", verbose)
scraperConfig.Verbose = verbose
// Use config if available, otherwise use command-line flags
var urlConfigs []scraper.URLConfig
if len(urls) == 0 && len(cfg.Scrape.URLs) > 0 {
urlConfigs = make([]scraper.URLConfig, len(cfg.Scrape.URLs))
for i, u := range cfg.Scrape.URLs {
urlConfigs[i] = scraper.URLConfig{
URL: u.URL,
CSSLocator: u.CSSLocator,
ExcludeSelectors: u.ExcludeSelectors,
OutputAlias: u.OutputAlias,
var siteConfigs []scraper.SiteConfig
if len(cfg.Sites) > 0 {
logger.Printf("Using configuration from rollup.yml for %d sites", len(cfg.Sites))
siteConfigs = make([]scraper.SiteConfig, len(cfg.Sites))
for i, site := range cfg.Sites {
siteConfigs[i] = scraper.SiteConfig{
BaseURL: site.BaseURL,
CSSLocator: site.CSSLocator,
ExcludeSelectors: site.ExcludeSelectors,
AllowedPaths: site.AllowedPaths,
ExcludePaths: site.ExcludePaths,
PathOverrides: convertPathOverrides(site.PathOverrides),
}
logger.Printf("Site %d configuration: BaseURL=%s, CSSLocator=%s, AllowedPaths=%v",
i+1, site.BaseURL, site.CSSLocator, site.AllowedPaths)
}
} else {
urlConfigs = make([]scraper.URLConfig, len(urls))
logger.Printf("No sites defined in rollup.yml, falling back to URL-based configuration")
siteConfigs = make([]scraper.SiteConfig, len(urls))
for i, u := range urls {
urlConfigs[i] = scraper.URLConfig{URL: u, CSSLocator: includeSelector}
siteConfigs[i] = scraper.SiteConfig{
BaseURL: u,
CSSLocator: includeSelector,
ExcludeSelectors: excludeSelectors,
AllowedPaths: []string{""},
}
logger.Printf("URL %d configuration: BaseURL=%s, CSSLocator=%s",
i+1, u, includeSelector)
}
}
if len(urlConfigs) == 0 {
return fmt.Errorf("no URLs provided. Use --urls flag with comma-separated URLs or set 'scrape.urls' in the rollup.yml file")
if len(siteConfigs) == 0 {
logger.Println("Error: No sites or URLs provided")
return fmt.Errorf("no sites or URLs provided. Use --urls flag with comma-separated URLs or set 'scrape.sites' in the rollup.yml file")
}
// Set default values for rate limiting
defaultRequestsPerSecond := 1.0
defaultBurstLimit := 3
// Use default values if not set in the configuration
requestsPerSecond := defaultRequestsPerSecond
if cfg.RequestsPerSecond != nil {
requestsPerSecond = *cfg.RequestsPerSecond
}
burstLimit := defaultBurstLimit
if cfg.BurstLimit != nil {
burstLimit = *cfg.BurstLimit
}
scraperConfig := scraper.Config{
URLs: urlConfigs,
Sites: siteConfigs,
OutputType: outputType,
Verbose: verbose,
Scrape: scraper.ScrapeConfig{
RequestsPerSecond: requestsPerSecond,
BurstLimit: burstLimit,
},
}
logger.Printf("Scraper configuration: OutputType=%s, RequestsPerSecond=%f, BurstLimit=%d",
outputType, requestsPerSecond, burstLimit)
logger.Println("Starting scraping process")
startTime := time.Now()
progressTicker := time.NewTicker(time.Second)
defer progressTicker.Stop()
done := make(chan bool)
messagePrinted := false
go func() {
for {
select {
case <-progressTicker.C:
if time.Since(startTime) > 5*time.Second && !messagePrinted {
fmt.Print("This is taking a while (hold tight) ")
messagePrinted = true
} else if messagePrinted {
fmt.Print(".")
}
case <-done:
return
}
}
}()
err := scraper.ScrapeSites(scraperConfig)
done <- true
fmt.Println() // New line after progress indicator
scrapedContent, err := scraper.ScrapeMultipleURLs(scraperConfig)
if err != nil {
logger.Printf("Error occurred during scraping: %v", err)
return fmt.Errorf("error scraping content: %v", err)
}
logger.Println("Scraping completed")
if outputType == "single" {
return writeSingleFile(scrapedContent)
} else {
return writeMultipleFiles(scrapedContent)
}
}
func writeSingleFile(content map[string]string) error {
outputFile := generateDefaultFilename(urls)
file, err := os.Create(outputFile)
if err != nil {
return fmt.Errorf("error creating output file: %v", err)
}
defer file.Close()
for url, c := range content {
_, err = file.WriteString(fmt.Sprintf("# Content from %s\n\n%s\n\n---\n\n", url, c))
if err != nil {
return fmt.Errorf("error writing content to file: %v", err)
}
}
fmt.Printf("Content has been extracted from %d URL(s) and saved to %s\n", len(content), outputFile)
return nil
}
func writeMultipleFiles(content map[string]string) error {
for url, c := range content {
filename := getFilenameFromContent(c, url)
file, err := os.Create(filename)
if err != nil {
return fmt.Errorf("error creating output file %s: %v", filename, err)
}
_, err = file.WriteString(fmt.Sprintf("# Content from %s\n\n%s", url, c))
file.Close()
if err != nil {
return fmt.Errorf("error writing content to file %s: %v", filename, err)
}
fmt.Printf("Content from %s has been saved to %s\n", url, filename)
}
return nil
}
func generateDefaultFilename(urls []string) string {
timestamp := time.Now().Format("20060102-150405")
return fmt.Sprintf("web-%s.rollup.md", timestamp)
}
func scrapeRecursively(urlStr string, depth int) (string, error) {
visited := make(map[string]bool)
return scrapeURL(urlStr, depth, visited)
}
func scrapeURL(urlStr string, depth int, visited map[string]bool) (string, error) {
if depth < 0 || visited[urlStr] {
return "", nil
}
visited[urlStr] = true
content, err := extractAndConvertContent(urlStr)
if err != nil {
return "", err
}
if depth > 0 {
links, err := scraper.ExtractLinks(urlStr)
if err != nil {
return content, fmt.Errorf("error extracting links: %v", err)
}
for _, link := range links {
subContent, err := scrapeURL(link, depth-1, visited)
if err != nil {
fmt.Printf("Warning: Error scraping %s: %v\n", link, err)
continue
}
content += "\n\n---\n\n" + subContent
}
}
return content, nil
}
func extractAndConvertContent(urlStr string) (string, error) {
content, err := scraper.FetchWebpageContent(urlStr)
if err != nil {
return "", fmt.Errorf("error fetching webpage content: %v", err)
}
if includeSelector != "" {
content, err = scraper.ExtractContentWithCSS(content, includeSelector, excludeSelectors)
if err != nil {
return "", fmt.Errorf("error extracting content with CSS: %v", err)
}
}
markdown, err := scraper.ProcessHTMLContent(content, scraper.Config{})
if err != nil {
return "", fmt.Errorf("error processing HTML content: %v", err)
}
parsedURL, err := url.Parse(urlStr)
if err != nil {
return "", fmt.Errorf("error parsing URL: %v", err)
}
header := fmt.Sprintf("# Content from %s\n\n", parsedURL.String())
return header + markdown + "\n\n", nil
}
func getFilenameFromContent(content, url string) string {
func getFilenameFromContent(content, urlStr string) (string, error) {
// Try to extract title from content
titleStart := strings.Index(content, "<title>")
titleEnd := strings.Index(content, "</title>")
if titleStart != -1 && titleEnd != -1 && titleEnd > titleStart {
title := content[titleStart+7 : titleEnd]
return sanitizeFilename(title) + ".md"
title := strings.TrimSpace(content[titleStart+7 : titleEnd])
if title != "" {
return sanitizeFilename(title) + ".rollup.md", nil
}
}
// If no title found, use the URL
return sanitizeFilename(url) + ".md"
// If no title found or title is empty, use the URL
parsedURL, err := url.Parse(urlStr)
if err != nil {
return "", fmt.Errorf("invalid URL: %v", err)
}
if parsedURL.Host == "" {
return "", fmt.Errorf("invalid URL: missing host")
}
filename := parsedURL.Host
if parsedURL.Path != "" && parsedURL.Path != "/" {
filename += strings.TrimSuffix(parsedURL.Path, "/")
}
return sanitizeFilename(filename) + ".rollup.md", nil
}
func sanitizeFilename(name string) string {
@@ -215,3 +202,15 @@ func sanitizeFilename(name string) string {
return name
}
func convertPathOverrides(configOverrides []config.PathOverride) []scraper.PathOverride {
scraperOverrides := make([]scraper.PathOverride, len(configOverrides))
for i, override := range configOverrides {
scraperOverrides[i] = scraper.PathOverride{
Path: override.Path,
CSSLocator: override.CSSLocator,
ExcludeSelectors: override.ExcludeSelectors,
}
}
return scraperOverrides
}

cmd/web_test.go (new file, 99 lines)

@@ -0,0 +1,99 @@
package cmd
import (
"testing"
"github.com/tnypxl/rollup/internal/config"
)
func TestConvertPathOverrides(t *testing.T) {
configOverrides := []config.PathOverride{
{
Path: "/blog",
CSSLocator: "article",
ExcludeSelectors: []string{".ads", ".comments"},
},
{
Path: "/products",
CSSLocator: ".product-description",
ExcludeSelectors: []string{".related-items"},
},
}
scraperOverrides := convertPathOverrides(configOverrides)
if len(scraperOverrides) != len(configOverrides) {
t.Errorf("Expected %d overrides, got %d", len(configOverrides), len(scraperOverrides))
}
for i, override := range scraperOverrides {
if override.Path != configOverrides[i].Path {
t.Errorf("Expected Path %s, got %s", configOverrides[i].Path, override.Path)
}
if override.CSSLocator != configOverrides[i].CSSLocator {
t.Errorf("Expected CSSLocator %s, got %s", configOverrides[i].CSSLocator, override.CSSLocator)
}
if len(override.ExcludeSelectors) != len(configOverrides[i].ExcludeSelectors) {
t.Errorf("Expected %d ExcludeSelectors, got %d", len(configOverrides[i].ExcludeSelectors), len(override.ExcludeSelectors))
}
for j, selector := range override.ExcludeSelectors {
if selector != configOverrides[i].ExcludeSelectors[j] {
t.Errorf("Expected ExcludeSelector %s, got %s", configOverrides[i].ExcludeSelectors[j], selector)
}
}
}
}
func TestSanitizeFilename(t *testing.T) {
tests := []struct {
input string
expected string
}{
{"Hello, World!", "Hello_World"},
{"file/with/path", "file_with_path"},
{"file.with.dots", "file_with_dots"},
{"___leading_underscores___", "leading_underscores"},
{"", "untitled"},
{"!@#$%^&*()", "untitled"},
}
for _, test := range tests {
result := sanitizeFilename(test.input)
if result != test.expected {
t.Errorf("sanitizeFilename(%q) = %q; want %q", test.input, result, test.expected)
}
}
}
func TestGetFilenameFromContent(t *testing.T) {
tests := []struct {
content string
url string
expected string
expectErr bool
}{
{"<title>Test Page</title>", "http://example.com", "Test_Page.rollup.md", false},
{"No title here", "http://example.com/page", "example_com_page.rollup.md", false},
{"<title> Trim Me </title>", "http://example.com", "Trim_Me.rollup.md", false},
{"<title></title>", "http://example.com", "example_com.rollup.md", false},
{"<title> </title>", "http://example.com", "example_com.rollup.md", false},
{"Invalid URL", "not a valid url", "", true},
{"No host", "http://", "", true},
}
for _, test := range tests {
result, err := getFilenameFromContent(test.content, test.url)
if test.expectErr {
if err == nil {
t.Errorf("getFilenameFromContent(%q, %q) expected an error, but got none", test.content, test.url)
}
} else {
if err != nil {
t.Errorf("getFilenameFromContent(%q, %q) unexpected error: %v", test.content, test.url, err)
}
if result != test.expected {
t.Errorf("getFilenameFromContent(%q, %q) = %q; want %q", test.content, test.url, result, test.expected)
}
}
}
}

docs/CHANGELOG.md (new file, 21 lines)

@@ -0,0 +1,21 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.0.3] - 2024-09-22
### Added
- Implemented web scraping functionality using Playwright
- Added support for CSS selectors to extract specific content
- Introduced rate limiting for web requests
- Created configuration options for scraping settings
### Changed
- Improved error handling and logging throughout the application
- Enhanced URL parsing and validation
### Fixed
- Resolved issues with concurrent scraping operations

go.mod (2 changed lines)

@@ -5,6 +5,7 @@ go 1.23
require (
github.com/JohannesKaufmann/html-to-markdown v1.6.0
github.com/spf13/cobra v1.8.1
golang.org/x/time v0.6.0
)
require (
@@ -21,7 +22,6 @@ require (
github.com/PuerkitoBio/goquery v1.9.2
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/playwright-community/playwright-go v0.4501.1
github.com/russross/blackfriday/v2 v2.1.0
github.com/spf13/pflag v1.0.5 // indirect
gopkg.in/yaml.v2 v2.4.0
)

go.sum (3 changed lines)

@@ -32,7 +32,6 @@ github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZb
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sebdah/goldie/v2 v2.5.3 h1:9ES/mNN+HNUbNWpVAlrzuZ7jE+Nrczbj8uFRjM7624Y=
github.com/sebdah/goldie/v2 v2.5.3/go.mod h1:oZ9fp0+se1eapSRjfYbsV/0Hqhbuu3bJVvKI/NNtssI=
@@ -103,6 +102,8 @@ golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=

internal/config/config.go

@@ -7,23 +7,64 @@ import (
"gopkg.in/yaml.v2"
)
// Config represents the configuration for the rollup tool
type Config struct {
FileTypes []string `yaml:"file_types"`
Ignore []string `yaml:"ignore"`
CodeGenerated []string `yaml:"code_generated"`
Scrape ScrapeConfig `yaml:"scrape"`
// FileExtensions is a list of file extensions to include in the rollup
FileExtensions []string `yaml:"file_extensions"`
// IgnorePaths is a list of glob patterns for paths to ignore
IgnorePaths []string `yaml:"ignore_paths"`
// CodeGeneratedPaths is a list of glob patterns for code-generated files
CodeGeneratedPaths []string `yaml:"code_generated_paths"`
// Sites is a list of site configurations for web scraping
Sites []SiteConfig `yaml:"sites"`
// OutputType specifies how the output should be generated
OutputType string `yaml:"output_type"`
// RequestsPerSecond limits the rate of web requests
RequestsPerSecond *float64 `yaml:"requests_per_second,omitempty"`
// BurstLimit sets the maximum burst size for rate limiting
BurstLimit *int `yaml:"burst_limit,omitempty"`
}
type ScrapeConfig struct {
URLs []URLConfig `yaml:"urls"`
OutputType string `yaml:"output_type"`
}
// SiteConfig contains configuration for scraping a single site
type SiteConfig struct {
// BaseURL is the starting point for scraping this site
BaseURL string `yaml:"base_url"`
type URLConfig struct {
URL string `yaml:"url"`
CSSLocator string `yaml:"css_locator"`
// CSSLocator is used to extract specific content
CSSLocator string `yaml:"css_locator"`
// ExcludeSelectors lists CSS selectors for content to exclude
ExcludeSelectors []string `yaml:"exclude_selectors"`
// AllowedPaths lists paths that are allowed to be scraped
AllowedPaths []string `yaml:"allowed_paths"`
// ExcludePaths lists paths that should not be scraped
ExcludePaths []string `yaml:"exclude_paths"`
// FileNamePrefix provides the base name for output files
FileNamePrefix string `yaml:"file_name_prefix"`
// PathOverrides allows for path-specific configurations
PathOverrides []PathOverride `yaml:"path_overrides"`
}
// PathOverride allows for path-specific configurations
type PathOverride struct {
// Path is the URL path this override applies to
Path string `yaml:"path"`
// CSSLocator overrides the site-wide CSS locator for this path
CSSLocator string `yaml:"css_locator"`
// ExcludeSelectors overrides the site-wide exclude selectors for this path
ExcludeSelectors []string `yaml:"exclude_selectors"`
OutputAlias string `yaml:"output_alias"`
}
func Load(configPath string) (*Config, error) {
@@ -38,15 +79,36 @@ func Load(configPath string) (*Config, error) {
return nil, fmt.Errorf("error parsing config file: %v", err)
}
if err := config.Validate(); err != nil {
return nil, fmt.Errorf("invalid configuration: %v", err)
}
return &config, nil
}
func DefaultConfigPath() string {
return "rollup.yml"
}
// Validate checks the configuration for any invalid values
func (c *Config) Validate() error {
if len(c.FileExtensions) == 0 && len(c.Sites) == 0 {
return fmt.Errorf("file_extensions or sites must be specified")
}
func FileExists(filename string) bool {
_, err := os.Stat(filename)
return err == nil
}
if c.OutputType != "" && c.OutputType != "single" && c.OutputType != "separate" {
return fmt.Errorf("output_type must be 'single' or 'separate'")
}
if c.RequestsPerSecond != nil && *c.RequestsPerSecond <= 0 {
return fmt.Errorf("requests_per_second must be positive")
}
if c.BurstLimit != nil && *c.BurstLimit <= 0 {
return fmt.Errorf("burst_limit must be positive")
}
for _, site := range c.Sites {
if site.BaseURL == "" {
return fmt.Errorf("base_url must be specified for each site")
}
}
return nil
}

internal/config/config_test.go (new file, 173 lines)

@@ -0,0 +1,173 @@
package config
import (
"os"
"reflect"
"testing"
)
func TestLoad(t *testing.T) {
// Create a temporary config file
content := []byte(`
file_extensions:
- go
- md
ignore_paths:
- "*.tmp"
- "**/*.log"
code_generated_paths:
- "generated_*.go"
sites:
- base_url: "https://example.com"
css_locator: "main"
exclude_selectors:
- ".ads"
max_depth: 2
allowed_paths:
- "/blog"
exclude_paths:
- "/admin"
file_name_prefix: "example"
path_overrides:
- path: "/special"
css_locator: ".special-content"
exclude_selectors:
- ".sidebar"
output_type: "single"
requests_per_second: 1.0
burst_limit: 5
`)
tmpfile, err := os.CreateTemp("", "config*.yml")
if err != nil {
t.Fatalf("Failed to create temp file: %v", err)
}
defer os.Remove(tmpfile.Name())
if _, err = tmpfile.Write(content); err != nil {
t.Fatalf("Failed to write to temp file: %v", err)
}
if err = tmpfile.Close(); err != nil {
t.Fatalf("Failed to close temp file: %v", err)
}
// Test loading the config
config, err := Load(tmpfile.Name())
if err != nil {
t.Fatalf("Load() failed: %v", err)
}
// Check if the loaded config matches the expected values
rps := 1.0
bl := 5
expectedConfig := &Config{
FileExtensions: []string{"go", "md"},
IgnorePaths: []string{"*.tmp", "**/*.log"},
CodeGeneratedPaths: []string{"generated_*.go"},
Sites: []SiteConfig{
{
BaseURL: "https://example.com",
CSSLocator: "main",
ExcludeSelectors: []string{".ads"},
AllowedPaths: []string{"/blog"},
ExcludePaths: []string{"/admin"},
FileNamePrefix: "example",
PathOverrides: []PathOverride{
{
Path: "/special",
CSSLocator: ".special-content",
ExcludeSelectors: []string{".sidebar"},
},
},
},
},
OutputType: "single",
RequestsPerSecond: &rps,
BurstLimit: &bl,
}
if !reflect.DeepEqual(config, expectedConfig) {
t.Errorf("Loaded config does not match expected config.\nGot: %+v\nWant: %+v", config, expectedConfig)
}
}
func TestValidate(t *testing.T) {
tests := []struct {
name string
config Config
wantErr bool
}{
{
name: "Valid config",
config: Config{
FileExtensions: []string{"go"},
Sites: []SiteConfig{
{BaseURL: "https://example.com"},
},
},
wantErr: false,
},
{
name: "No file extensions",
config: Config{},
wantErr: true,
},
{
name: "Invalid requests per second",
config: Config{
FileExtensions: []string{"go"},
RequestsPerSecond: func() *float64 { f := -1.0; return &f }(),
},
wantErr: true,
},
{
name: "Invalid burst limit",
config: Config{
FileExtensions: []string{"go"},
BurstLimit: func() *int { i := -1; return &i }(),
},
wantErr: true,
},
{
name: "Site without base URL",
config: Config{
FileExtensions: []string{"go"},
Sites: []SiteConfig{{}},
},
wantErr: true,
},
{
name: "Valid output type single",
config: Config{
FileExtensions: []string{"go"},
OutputType: "single",
},
wantErr: false,
},
{
name: "Valid output type separate",
config: Config{
FileExtensions: []string{"go"},
OutputType: "separate",
},
wantErr: false,
},
{
name: "Invalid output type",
config: Config{
FileExtensions: []string{"go"},
OutputType: "invalid",
},
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.config.Validate()
if (err != nil) != tt.wantErr {
t.Errorf("Validate() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}

internal/scraper/scraper.go

@@ -1,17 +1,23 @@
package scraper
import (
"context"
"fmt"
"io/ioutil"
"io"
"log"
"math/rand"
"net/url"
"os"
"path/filepath"
"regexp"
"strings"
"sync"
"time"
md "github.com/JohannesKaufmann/html-to-markdown"
"github.com/PuerkitoBio/goquery"
"github.com/playwright-community/playwright-go"
md "github.com/JohannesKaufmann/html-to-markdown"
"golang.org/x/time/rate"
)
var logger *log.Logger
@@ -23,51 +29,190 @@ var (
// Config holds the scraper configuration
type Config struct {
URLs []URLConfig
Sites []SiteConfig
OutputType string
Verbose bool
Scrape ScrapeConfig
}
// ScrapeMultipleURLs scrapes multiple URLs concurrently
func ScrapeMultipleURLs(config Config) (map[string]string, error) {
// ScrapeConfig holds the scraping-specific configuration
type ScrapeConfig struct {
RequestsPerSecond float64
BurstLimit int
}
// SiteConfig holds configuration for a single site
type SiteConfig struct {
BaseURL string
CSSLocator string
ExcludeSelectors []string
AllowedPaths []string
ExcludePaths []string
FileNamePrefix string
PathOverrides []PathOverride
}
// PathOverride holds path-specific overrides
type PathOverride struct {
Path string
CSSLocator string
ExcludeSelectors []string
}
func ScrapeSites(config Config) error {
logger.Println("Starting ScrapeSites function - Verbose mode is active")
results := make(chan struct {
url string
content string
site SiteConfig // Add site config to track which site the content came from
err error
}, len(config.URLs))
})
for _, urlConfig := range config.URLs {
go func(cfg URLConfig) {
content, err := scrapeURL(cfg)
results <- struct {
url string
content string
err error
}{cfg.URL, content, err}
}(urlConfig)
limiter := rate.NewLimiter(rate.Limit(config.Scrape.RequestsPerSecond), config.Scrape.BurstLimit)
logger.Printf("Rate limiter configured with %f requests per second and burst limit of %d\n",
config.Scrape.RequestsPerSecond, config.Scrape.BurstLimit)
var wg sync.WaitGroup
totalURLs := 0
for _, site := range config.Sites {
totalURLs += len(site.AllowedPaths)
}
for _, site := range config.Sites {
logger.Printf("Processing site: %s\n", site.BaseURL)
wg.Add(1)
go func(site SiteConfig) {
defer wg.Done()
for _, path := range site.AllowedPaths {
fullURL := site.BaseURL + path
logger.Printf("Queueing URL for scraping: %s\n", fullURL)
scrapeSingleURL(fullURL, site, results, limiter)
}
}(site)
}
scrapedContent := make(map[string]string)
for i := 0; i < len(config.URLs); i++ {
result := <-results
go func() {
wg.Wait()
close(results)
logger.Println("All goroutines completed, results channel closed")
}()
// Use a map that includes site configuration
scrapedContent := make(map[string]struct {
content string
site SiteConfig
})
for result := range results {
if result.err != nil {
logger.Printf("Error scraping %s: %v\n", result.url, result.err)
continue
}
scrapedContent[result.url] = result.content
logger.Printf("Successfully scraped content from %s (length: %d)\n",
result.url, len(result.content))
scrapedContent[result.url] = struct {
content string
site SiteConfig
}{
content: result.content,
site: result.site,
}
}
return scrapedContent, nil
logger.Printf("Total URLs processed: %d\n", totalURLs)
logger.Printf("Successfully scraped content from %d URLs\n", len(scrapedContent))
return SaveToFiles(scrapedContent, config)
}
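// Flow summary (annotation, not from the diff): ScrapeSites spawns one
// goroutine per site, each iterating that site's AllowedPaths; every fetch
// waits on the shared rate limiter, results are gathered over a channel that
// closes once the WaitGroup drains, and the aggregated map is handed to
// SaveToFiles.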
func scrapeURL(config URLConfig) (string, error) {
content, err := FetchWebpageContent(config.URL)
func scrapeSingleURL(url string, site SiteConfig, results chan<- struct {
url string
content string
site SiteConfig
err error
}, limiter *rate.Limiter) {
logger.Printf("Starting to scrape URL: %s\n", url)
err := limiter.Wait(context.Background())
if err != nil {
results <- struct {
url string
content string
site SiteConfig
err error
}{url, "", site, fmt.Errorf("rate limiter error: %v", err)}
return
}
cssLocator, excludeSelectors := getOverrides(url, site)
content, err := scrapeURL(url, cssLocator, excludeSelectors)
if err != nil {
results <- struct {
url string
content string
site SiteConfig
err error
}{url, "", site, err}
return
}
results <- struct {
url string
content string
site SiteConfig
err error
}{url, content, site, nil}
}
func isAllowedURL(urlStr string, site SiteConfig) bool {
parsedURL, err := url.Parse(urlStr)
if err != nil {
return false
}
baseURL, _ := url.Parse(site.BaseURL)
if parsedURL.Host != baseURL.Host {
return false
}
path := parsedURL.Path
for _, allowedPath := range site.AllowedPaths {
if strings.HasPrefix(path, allowedPath) {
for _, excludePath := range site.ExcludePaths {
if strings.HasPrefix(path, excludePath) {
return false
}
}
return true
}
}
return false
}
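// Worked example (annotation, not from the diff): for a site with
// BaseURL "https://example.com", AllowedPaths ["/docs", "/blog"], and
// ExcludePaths ["/admin"], https://example.com/docs/intro is allowed,
// https://example.com/admin is rejected (no allowed prefix), and
// https://other.com/docs fails the host check.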
func getOverrides(urlStr string, site SiteConfig) (string, []string) {
parsedURL, _ := url.Parse(urlStr)
path := parsedURL.Path
for _, override := range site.PathOverrides {
if strings.HasPrefix(path, override.Path) {
if override.CSSLocator != "" {
return override.CSSLocator, override.ExcludeSelectors
}
return site.CSSLocator, override.ExcludeSelectors
}
}
return site.CSSLocator, site.ExcludeSelectors
}
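// Worked example (annotation, not from the diff): with the README's
// "/special-page" override, any path starting with "/special-page" resolves to
// css_locator ".special-content" and exclude_selectors [".special-ads"];
// every other path falls back to the site-wide selectors.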
func scrapeURL(url, cssLocator string, excludeSelectors []string) (string, error) {
content, err := FetchWebpageContent(url)
if err != nil {
return "", err
}
if config.CSSLocator != "" {
content, err = ExtractContentWithCSS(content, config.CSSLocator, config.ExcludeSelectors)
if cssLocator != "" {
content, err = ExtractContentWithCSS(content, cssLocator, excludeSelectors)
if err != nil {
return "", err
}
@@ -90,9 +235,14 @@ func getFilenameFromContent(content, url string) string {
}
func sanitizeFilename(name string) string {
// Remove any character that isn't alphanumeric, dash, or underscore
reg, _ := regexp.Compile("[^a-zA-Z0-9-_]+")
return reg.ReplaceAllString(name, "_")
// Replace all non-alphanumeric characters with dashes
reg := regexp.MustCompile("[^a-zA-Z0-9]+")
name = reg.ReplaceAllString(name, "-")
// Remove any leading or trailing dashes
name = strings.Trim(name, "-")
// Collapse multiple consecutive dashes into one
reg = regexp.MustCompile("-+")
return reg.ReplaceAllString(name, "-")
}
// URLConfig holds configuration for a single URL
@@ -100,15 +250,15 @@ type URLConfig struct {
URL string
CSSLocator string
ExcludeSelectors []string
OutputAlias string
FileNamePrefix string
}
// SetupLogger initializes the logger based on the verbose flag
func SetupLogger(verbose bool) {
if verbose {
logger = log.New(log.Writer(), "SCRAPER: ", log.LstdFlags)
logger = log.New(os.Stdout, "SCRAPER: ", log.LstdFlags)
} else {
logger = log.New(ioutil.Discard, "", 0)
logger = log.New(io.Discard, "", 0)
}
}
@@ -128,7 +278,7 @@ func InitPlaywright() error {
return fmt.Errorf("could not start Playwright: %v", err)
}
userAgent := "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
userAgent := "Mozilla/5.0 (Linux; Android 15; Pixel 9 Build/AP3A.241105.008) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.6723.106 Mobile Safari/537.36 OPX/2.5"
browser, err = pw.Chromium.Launch(playwright.BrowserTypeLaunchOptions{
Args: []string{fmt.Sprintf("--user-agent=%s", userAgent)},
@@ -151,6 +301,129 @@ func ClosePlaywright() {
}
}
// InitBrowser initializes the browser
func InitBrowser() error {
return InitPlaywright()
}
// CloseBrowser closes the browser
func CloseBrowser() {
ClosePlaywright()
}
// SaveToFiles writes the scraped content to files based on output type
func SaveToFiles(content map[string]struct {
content string
site SiteConfig
}, config Config) error {
if config.OutputType == "" {
config.OutputType = "separate" // default to separate files if not specified
}
switch config.OutputType {
case "single":
if err := os.MkdirAll("output", 0755); err != nil {
return fmt.Errorf("failed to create output directory: %v", err)
}
var combined strings.Builder
for url, data := range content {
combined.WriteString(fmt.Sprintf("## %s\n\n", url))
combined.WriteString(data.content)
combined.WriteString("\n\n")
}
return os.WriteFile(filepath.Join("output", "combined.md"), []byte(combined.String()), 0644)
case "separate":
if err := os.MkdirAll("output", 0755); err != nil {
return fmt.Errorf("failed to create output directory: %v", err)
}
// Group content by site and path
contentBySitePath := make(map[string]map[string]string)
for urlStr, data := range content {
parsedURL, err := url.Parse(urlStr)
if err != nil {
logger.Printf("Warning: Could not parse URL %s: %v", urlStr, err)
continue
}
// Find matching allowed path for this URL
var matchingPath string
for _, path := range data.site.AllowedPaths {
if strings.HasPrefix(parsedURL.Path, path) {
matchingPath = path
break
}
}
if matchingPath == "" {
logger.Printf("Warning: No matching allowed path for URL %s", urlStr)
continue
}
siteKey := fmt.Sprintf("%s-%s", data.site.BaseURL, data.site.FileNamePrefix)
if contentBySitePath[siteKey] == nil {
contentBySitePath[siteKey] = make(map[string]string)
}
// Combine all content for the same path
if existing, exists := contentBySitePath[siteKey][matchingPath]; exists {
contentBySitePath[siteKey][matchingPath] = existing + "\n\n" + data.content
} else {
contentBySitePath[siteKey][matchingPath] = data.content
}
}
// Write files for each site and path
for siteKey, pathContent := range contentBySitePath {
for path, content := range pathContent {
parts := strings.SplitN(siteKey, "|", 2) // split on the separator added above
prefix := parts[1] // the FileNamePrefix part
if prefix == "" {
prefix = "doc" // default prefix if none specified
}
normalizedPath := NormalizePathForFilename(path)
if normalizedPath == "" {
normalizedPath = "index"
}
filename := filepath.Join("output", fmt.Sprintf("%s-%s.md",
prefix, normalizedPath))
// Ensure we don't have empty files
if strings.TrimSpace(content) == "" {
logger.Printf("Skipping empty content for path %s", path)
continue
}
if err := os.WriteFile(filename, []byte(content), 0644); err != nil {
return fmt.Errorf("failed to write file %s: %v", filename, err)
}
logger.Printf("Wrote content to %s", filename)
}
}
return nil
default:
return fmt.Errorf("unsupported output type: %s", config.OutputType)
}
}
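A minimal sketch of driving SaveToFiles from inside the package; the URL and config values are made up for illustration:

pages := map[string]struct {
	content string
	site    SiteConfig
}{
	"https://example.com/docs/intro": {
		content: "# Intro\n\nHello.",
		site: SiteConfig{
			BaseURL:        "https://example.com",
			AllowedPaths:   []string{"/docs"},
			FileNamePrefix: "docs",
		},
	},
}
// With OutputType "separate" this writes output/docs-docs.md
// (prefix + normalized allowed path, not the full URL path).
if err := SaveToFiles(pages, Config{OutputType: "separate"}); err != nil {
	log.Fatal(err)
}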
// NormalizePathForFilename converts a URL path into a valid filename component
func NormalizePathForFilename(urlPath string) string {
// Remove leading/trailing slashes
path := strings.Trim(urlPath, "/")
// Replace all non-alphanumeric characters with dashes
reg := regexp.MustCompile("[^a-zA-Z0-9]+")
path = reg.ReplaceAllString(path, "-")
// Remove any leading or trailing dashes
path = strings.Trim(path, "-")
// Collapse multiple consecutive dashes into one
reg = regexp.MustCompile("-+")
path = reg.ReplaceAllString(path, "-")
return path
}
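Illustrative transformations, following the regex steps above:

NormalizePathForFilename("/blog/posts/2024/") // -> "blog-posts-2024"
NormalizePathForFilename("/docs")             // -> "docs"
NormalizePathForFilename("/")                 // -> "" (SaveToFiles substitutes "index")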
// FetchWebpageContent retrieves the content of a webpage using Playwright
func FetchWebpageContent(urlStr string) (string, error) {
logger.Printf("Fetching webpage content for URL: %s\n", urlStr)
@@ -189,7 +462,9 @@ func FetchWebpageContent(urlStr string) (string, error) {
}
logger.Println("Waiting for body element")
- _, err = page.WaitForSelector("body", playwright.PageWaitForSelectorOptions{
+ bodyElement := page.Locator("body")
+ err = bodyElement.WaitFor(playwright.LocatorWaitForOptions{
State: playwright.WaitForSelectorStateVisible,
})
if err != nil {
@@ -206,7 +481,7 @@ func FetchWebpageContent(urlStr string) (string, error) {
if content == "" {
logger.Println(" content is empty, falling back to body content")
- content, err = page.InnerHTML("body")
+ content, err = bodyElement.InnerHTML()
if err != nil {
logger.Printf("Error getting body content: %v\n", err)
return "", fmt.Errorf("could not get body content: %v", err)
@@ -290,7 +565,8 @@ func scrollPage(page playwright.Page) error {
previousHeight = currentHeight
- page.WaitForTimeout(500)
+ // Wait for content to load before scrolling again
+ time.Sleep(100 * time.Millisecond)
}
logger.Println("Scrolling back to top")
@@ -304,39 +580,6 @@ func scrollPage(page playwright.Page) error {
return nil
}
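Only the tail of scrollPage is visible in these hunks. The enclosing loop presumably follows the usual scroll-until-stable pattern; roughly, under that assumption (a sketch, not the committed code):

for {
	heightVal, err := page.Evaluate("() => document.body.scrollHeight")
	if err != nil {
		return fmt.Errorf("could not read scroll height: %v", err)
	}
	currentHeight, _ := heightVal.(int) // playwright-go returns integral JS numbers as int
	if currentHeight == previousHeight {
		break // height stopped growing; assume all lazy content has loaded
	}
	if _, err := page.Evaluate("() => window.scrollTo(0, document.body.scrollHeight)"); err != nil {
		return fmt.Errorf("could not scroll: %v", err)
	}
	previousHeight = currentHeight
	time.Sleep(100 * time.Millisecond) // wait for content to load before scrolling again
}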
- // ExtractLinks extracts all links from the given URL
- func ExtractLinks(urlStr string) ([]string, error) {
- logger.Printf("Extracting links from URL: %s\n", urlStr)
- page, err := browser.NewPage()
- if err != nil {
- return nil, fmt.Errorf("could not create page: %v", err)
- }
- defer page.Close()
- if _, err = page.Goto(urlStr, playwright.PageGotoOptions{
- WaitUntil: playwright.WaitUntilStateNetworkidle,
- }); err != nil {
- return nil, fmt.Errorf("could not go to page: %v", err)
- }
- links, err := page.Evaluate(`() => {
- const anchors = document.querySelectorAll('a');
- return Array.from(anchors).map(a => a.href);
- }`)
- if err != nil {
- return nil, fmt.Errorf("could not extract links: %v", err)
- }
- var result []string
- for _, link := range links.([]interface{}) {
- result = append(result, link.(string))
- }
- logger.Printf("Extracted %d links\n", len(result))
- return result, nil
- }
// ExtractContentWithCSS extracts content from HTML using a CSS selector
func ExtractContentWithCSS(content, includeSelector string, excludeSelectors []string) (string, error) {
logger.Printf("Extracting content with CSS selector: %s\n", includeSelector)
@@ -364,6 +607,23 @@ func ExtractContentWithCSS(content, includeSelector string, excludeSelectors []s
return "", fmt.Errorf("error extracting content with CSS selector: %v", err)
}
// Trim leading and trailing whitespace
selectedContent = strings.TrimSpace(selectedContent)
// Normalize newlines
selectedContent = strings.ReplaceAll(selectedContent, "\r\n", "\n")
selectedContent = strings.ReplaceAll(selectedContent, "\r", "\n")
// Remove indentation while preserving structure
lines := strings.Split(selectedContent, "\n")
for i, line := range lines {
lines[i] = strings.TrimSpace(line)
}
selectedContent = strings.Join(lines, "\n")
// Remove any leading or trailing newlines
selectedContent = strings.Trim(selectedContent, "\n")
logger.Printf("Extracted content length: %d\n", len(selectedContent))
return selectedContent, nil
}
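Example invocation (the HTML variable and selector values are illustrative):

inner, err := ExtractContentWithCSS(rawHTML, "main", []string{".ads"})
if err != nil {
	log.Fatalf("extraction failed: %v", err)
}
fmt.Println(inner) // de-indented inner HTML of <main>, with ".ads" nodes removed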


@@ -0,0 +1,181 @@
package scraper
import (
"io"
"log"
// "net/http"
// "net/http/httptest"
"reflect"
"strings"
"testing"
)
func TestIsAllowedURL(t *testing.T) {
site := SiteConfig{
BaseURL: "https://example.com",
AllowedPaths: []string{"/blog", "/products"},
ExcludePaths: []string{"/admin", "/private"},
}
tests := []struct {
url string
expected bool
}{
{"https://example.com/blog/post1", true},
{"https://example.com/products/item1", true},
{"https://example.com/admin/dashboard", false},
{"https://example.com/private/data", false},
{"https://example.com/other/page", false},
{"https://othersite.com/blog/post1", false},
}
for _, test := range tests {
result := isAllowedURL(test.url, site)
if result != test.expected {
t.Errorf("isAllowedURL(%q) = %v, want %v", test.url, result, test.expected)
}
}
}
func TestGetOverrides(t *testing.T) {
site := SiteConfig{
CSSLocator: "main",
ExcludeSelectors: []string{".ads"},
PathOverrides: []PathOverride{
{
Path: "/special",
CSSLocator: ".special-content",
ExcludeSelectors: []string{".sidebar"},
},
},
}
tests := []struct {
url string
expectedLocator string
expectedExcludes []string
}{
{"https://example.com/normal", "main", []string{".ads"}},
{"https://example.com/special", ".special-content", []string{".sidebar"}},
{"https://example.com/special/page", ".special-content", []string{".sidebar"}},
}
for _, test := range tests {
locator, excludes := getOverrides(test.url, site)
if locator != test.expectedLocator {
t.Errorf("getOverrides(%q) locator = %q, want %q", test.url, locator, test.expectedLocator)
}
if !reflect.DeepEqual(excludes, test.expectedExcludes) {
t.Errorf("getOverrides(%q) excludes = %v, want %v", test.url, excludes, test.expectedExcludes)
}
}
}
func TestExtractContentWithCSS(t *testing.T) {
// Initialize logger for testing
logger = log.New(io.Discard, "", 0)
html := `
<html>
<body>
<main>
<h1>Main Content</h1>
<p>This is the main content.</p>
<div class="ads">Advertisement</div>
</main>
<aside>Sidebar content</aside>
</body>
</html>
`
tests := []struct {
includeSelector string
excludeSelectors []string
expected string
}{
{"main", nil, "<h1>Main Content</h1>\n<p>This is the main content.</p>\n<div class=\"ads\">Advertisement</div>"},
{"main", []string{".ads"}, "<h1>Main Content</h1>\n<p>This is the main content.</p>"},
{"aside", nil, "Sidebar content"},
}
for _, test := range tests {
result, err := ExtractContentWithCSS(html, test.includeSelector, test.excludeSelectors)
if err != nil {
t.Errorf("ExtractContentWithCSS() returned error: %v", err)
continue
}
if strings.TrimSpace(result) != strings.TrimSpace(test.expected) {
t.Errorf("ExtractContentWithCSS() = %q, want %q", result, test.expected)
}
}
}
func TestProcessHTMLContent(t *testing.T) {
html := `
<html>
<body>
<h1>Test Heading</h1>
<p>This is a <strong>test</strong> paragraph.</p>
<ul>
<li>Item 1</li>
<li>Item 2</li>
</ul>
</body>
</html>
`
expected := strings.TrimSpace(`
# Test Heading
This is a **test** paragraph.
- Item 1
- Item 2
`)
result, err := ProcessHTMLContent(html, Config{})
if err != nil {
t.Fatalf("ProcessHTMLContent() returned error: %v", err)
}
if strings.TrimSpace(result) != expected {
t.Errorf("ProcessHTMLContent() = %q, want %q", result, expected)
}
}
// func TestExtractLinks(t *testing.T) {
// // Initialize Playwright before running the test
// if err := InitPlaywright(); err != nil {
// t.Fatalf("Failed to initialize Playwright: %v", err)
// }
// defer ClosePlaywright()
// server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// w.Header().Set("Content-Type", "text/html")
// w.Write([]byte(`
// <html>
// <body>
// <a href="https://example.com/page1">Page 1</a>
// <a href="https://example.com/page2">Page 2</a>
// <a href="https://othersite.com">Other Site</a>
// </body>
// </html>
// `))
// }))
// defer server.Close()
// links, err := ExtractLinks(server.URL)
// if err != nil {
// t.Fatalf("ExtractLinks() returned error: %v", err)
// }
// expectedLinks := []string{
// "https://example.com/page1",
// "https://example.com/page2",
// "https://othersite.com",
// }
// if !reflect.DeepEqual(links, expectedLinks) {
// t.Errorf("ExtractLinks() = %v, want %v", links, expectedLinks)
// }
// }

main.go

@@ -2,42 +2,13 @@ package main
import (
"fmt"
"log"
"os"
"github.com/tnypxl/rollup/cmd"
"github.com/tnypxl/rollup/internal/config"
"github.com/tnypxl/rollup/internal/scraper"
)
- var cfg *config.Config
func main() {
- // Check if the command is "help"
- isHelpCommand := len(os.Args) > 1 && (os.Args[1] == "help" || os.Args[1] == "--help" || os.Args[1] == "-h")
- var cfg *config.Config
- var err error
- if !isHelpCommand {
- configPath := config.DefaultConfigPath()
- cfg, err = config.Load(configPath)
- if err != nil {
- log.Printf("Warning: Failed to load configuration: %v", err)
- // Continue execution without a config file
- }
- // Initialize the scraper logger with default verbosity (false)
- scraper.SetupLogger(false)
- err = scraper.InitPlaywright()
- if err != nil {
- log.Fatalf("Failed to initialize Playwright: %v", err)
- }
- defer scraper.ClosePlaywright()
- }
- if err := cmd.Execute(cfg); err != nil {
+ if err := cmd.Execute(); err != nil {
fmt.Println(err)
os.Exit(1)
}