author     René 'Necoro' Neumann <necoro@necoro.eu>  2020-04-25 17:54:41 +0200
committer  René 'Necoro' Neumann <necoro@necoro.eu>  2020-04-25 17:54:41 +0200
commit     5886c4396c7e24c8d86c0f18c2e7215f792e25fb (patch)
tree       94f4bf00679cee58f59a78f9316459daf035dac9
parent     60ff245e6d965785d54a212b2a4ddd9b16159460 (diff)
download   feed2imap-go-5886c4396c7e24c8d86c0f18c2e7215f792e25fb.tar.gz
           feed2imap-go-5886c4396c7e24c8d86c0f18c2e7215f792e25fb.tar.bz2
           feed2imap-go-5886c4396c7e24c8d86c0f18c2e7215f792e25fb.zip
Shortcut: do nothing if there is no feed left
-rw-r--r--  internal/feed/state.go  4
-rw-r--r--  main.go                 5
2 files changed, 9 insertions, 0 deletions
diff --git a/internal/feed/state.go b/internal/feed/state.go
index 8efef5e..2a0a1e1 100644
--- a/internal/feed/state.go
+++ b/internal/feed/state.go
@@ -82,3 +82,7 @@ func (state *State) RemoveUndue() {
 		}
 	}
 }
+
+func (state *State) NumFeeds() int {
+	return len(state.feeds)
+}
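
For context (not part of this commit): the feeds field counted by NumFeeds is not shown in the diff. The following is a minimal sketch of how the State type and the new accessor plausibly fit together; the Feed placeholder and the map layout are assumptions made only for illustration, based on the len(state.feeds) call above.

// Sketch only: the real definitions live in internal/feed.
package feed

// Feed stands in for the per-feed state; its fields are irrelevant here.
type Feed struct{}

// State tracks the feeds remaining after RemoveUndue has filtered them.
type State struct {
	feeds map[string]*Feed // assumed layout; any len()-able container works
}

// NumFeeds reports how many feeds are still scheduled for processing.
func (state *State) NumFeeds() int {
	return len(state.feeds)
}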
diff --git a/main.go b/main.go
index e63b883..415d19f 100644
--- a/main.go
+++ b/main.go
@@ -68,6 +68,11 @@ func run() error {
 	state.RemoveUndue()
+	if state.NumFeeds() == 0 {
+		// nothing to do
+		return nil
+	}
+
 	if success := state.Fetch(); success == 0 {
 		return fmt.Errorf("No successfull feed fetch.")
 	}
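
One Go detail the shortcut relies on: len of a nil map (or slice) is 0, so NumFeeds returns zero whether RemoveUndue removed every feed or the container was never populated, and run() exits before attempting any fetch. A tiny standalone illustration; the State type is redeclared here only for the example, with the same assumed layout as in the sketch above.

package main

import "fmt"

type State struct {
	feeds map[string]struct{} // assumed layout, as above
}

func (state *State) NumFeeds() int { return len(state.feeds) }

func main() {
	var empty State // feeds is nil; len(nil map) == 0
	if empty.NumFeeds() == 0 {
		fmt.Println("nothing to do") // mirrors the early return in run()
	}
}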