path: root/cache.c
Commit message | Author | Date | Files | Lines
* Move cgit_repo into cgit_context | Lars Hjemli | 2008-02-16 | 1 | -3/+3
This removes the global variable which is used to keep track of the currently selected repository, and adds a new variable in the cgit_context structure.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
* Add all config variables into struct cgit_context | Lars Hjemli | 2008-02-16 | 1 | -5/+5
This removes another big set of global variables, and introduces the cgit_prepare_context() function which populates a context variable with compile-time default values.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
* Introduce struct cgit_context | Lars Hjemli | 2008-02-16 | 1 | -2/+2
This struct will hold all the cgit runtime information currently found in a multitude of global variables. The first cleanup removes all querystring-related variables.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
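To make the direction of these three context-related commits concrete, here is a minimal sketch of what such a context structure and its prepare function could look like. All names, fields and default values below (app_context, ctx_query, prepare_context and so on) are illustrative assumptions, not the actual cgit definitions.

    /* Illustrative sketch only: a request context gathering state that
     * would otherwise live in global variables. Not the real cgit code. */
    #include <string.h>

    struct ctx_query {
        char *repo;           /* repository selected via the querystring */
        char *page;           /* requested page, e.g. "log" or "tree" */
        char *path;           /* optional path inside the repository */
    };

    struct ctx_cfg {
        char *cache_root;     /* where cachefiles are stored */
        int   cache_repo_ttl; /* TTL in minutes for repo pages */
    };

    struct app_context {
        struct ctx_query qry; /* querystring-derived state */
        struct ctx_cfg   cfg; /* configuration, seeded with defaults */
    };

    /* Fill a context with compile-time defaults, in the spirit of the
     * cgit_prepare_context() mentioned above (values are made up). */
    static void prepare_context(struct app_context *ctx)
    {
        memset(ctx, 0, sizeof(*ctx));
        ctx->cfg.cache_root = "/var/cache/cgit";
        ctx->cfg.cache_repo_ttl = 5;
    }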
* cache_safe_filename() needs more buffers | Lars Hjemli | 2007-05-18 | 1 | -4/+9
The single static buffer makes it impossible to use the result of two different calls to this function simultaneously. Fix it by using 4 buffers.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
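The rotating-buffer idea can be sketched as follows; the function name and the buffer handling are illustrative assumptions, not the actual implementation.

    /* Sketch: hand out results from four rotating static buffers so up to
     * four results can be in use at the same time. Illustrative only. */
    #include <limits.h>
    #include <stdio.h>

    static const char *rotating_result(const char *text)
    {
        static char buf[4][PATH_MAX];
        static int idx;
        char *out = buf[idx];

        idx = (idx + 1) % 4;              /* next call gets another buffer */
        snprintf(out, sizeof(buf[0]), "%s", text);
        return out;
    }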
* Enable url=value querystring parameter | Lars Hjemli | 2007-05-18 | 1 | -3/+6
This makes it possible to use repo-urls like '/pub/scm/git/git.git' and even add path specifications, like '/pub/scm/git/git.git/log/documentation'.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
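One plausible way to split such a combined url value is a prefix match against the known repo urls; the helper below is a hypothetical illustration of that idea, not cgit's actual parser.

    /* Hypothetical sketch: split a url value like
     * "/pub/scm/git/git.git/log/documentation" into repo and remainder. */
    #include <stdio.h>
    #include <string.h>

    static const char *known_repos[] = { "/pub/scm/git/git.git", NULL };

    static void split_url(const char *url)
    {
        for (int i = 0; known_repos[i]; i++) {
            size_t len = strlen(known_repos[i]);

            if (!strncmp(url, known_repos[i], len) &&
                (url[len] == '\0' || url[len] == '/')) {
                printf("repo: %s\n", known_repos[i]);
                printf("rest: %s\n", url[len] ? url + len + 1 : "");
                return;
            }
        }
        printf("no repo matches '%s'\n", url);
    }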
* Remove troublesome chars from cachefile names | Lars Hjemli | 2007-01-12 | 1 | -0/+16
Add a function cache_safe_filename() which replaces possibly bad filename characters with '_'.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
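A minimal sketch of the replace-with-underscore idea; which characters count as "bad" here is an assumption made for illustration.

    /* Sketch: copy a name, replacing characters that are awkward in a
     * filename with '_'. The allowed set below is illustrative. */
    #include <ctype.h>

    static void sanitize_name(const char *in, char *out, int outlen)
    {
        int i;

        if (outlen <= 0)
            return;
        for (i = 0; in[i] && i < outlen - 1; i++) {
            unsigned char c = (unsigned char)in[i];

            out[i] = (isalnum(c) || c == '.' || c == '-') ? (char)c : '_';
        }
        out[i] = '\0';
    }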
* Move cache_prepare() to cgit | Lars Hjemli | 2007-01-12 | 1 | -22/+0
This moves some cgit-specific stuff away from cache.c.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
* Allow relative paths for cgit_cache_root | Lars Hjemli | 2006-12-16 | 1 | -0/+4
Make sure we chdir(2) back to the original getcwd(2) when a page has been generated. Also, if the cgit_cache_root does not exist, try to create it. This is a feature intended to ease testing/debugging.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
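A sketch of the save-cwd, enter-or-create, and chdir-back behaviour described here; the function name and the minimal error handling are assumptions for illustration.

    /* Sketch: enter a possibly relative cache root (creating it if it is
     * missing) and return to the original working directory afterwards. */
    #include <limits.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int with_cache_root(const char *cache_root)
    {
        char orig[PATH_MAX];

        if (!getcwd(orig, sizeof(orig)))
            return -1;
        if (chdir(cache_root)) {
            if (mkdir(cache_root, 0755) || chdir(cache_root))
                return -1;                /* could not create or enter it */
        }

        /* ... generate the page into the cache here ... */

        return chdir(orig);               /* go back where we started */
    }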
* cache_lock: do xstrdup/free on lockfile | Lars Hjemli | 2006-12-12 | 1 | -1/+2
Since fmt() uses 8 alternating static buffers, and cache_lock might call cache_create_dirs() multiple times, which in turn might call fmt() twice, after four iterations lockfile would be overwritten by a cache directory path. In the worst case, this could cause the cache directory to be unlinked and replaced by a cachefile. Fix: use xstrdup() on the result from fmt() before assigning it to lockfile, and call free(lockfile) before exit.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
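The fix boils down to owning a stable copy of the formatted string. A sketch of that pattern is shown below, with xstrdup() standing in for a strdup() wrapper that dies on allocation failure; the surrounding function is illustrative, not the real cache_lock().

    /* Sketch: copy a string built in a rotating static buffer before any
     * later call can overwrite that buffer. Illustrative only. */
    #include <stdlib.h>
    #include <string.h>

    static char *xstrdup_sketch(const char *s)
    {
        char *copy = strdup(s);

        if (!copy)
            abort();                      /* die on allocation failure */
        return copy;
    }

    static void lock_sketch(const char *formatted_path)
    {
        char *lockfile = xstrdup_sketch(formatted_path);

        /* ... further formatting calls may now reuse their static
         * buffers without clobbering lockfile ... */

        free(lockfile);                   /* release the copy on exit */
    }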
* Don't truncate valid cachefiles | Lars Hjemli | 2006-12-11 | 1 | -0/+5
An embarrassing thinko in cgit_check_cache() would truncate valid cachefiles in the following situation:
1) process A notices a missing/expired cachefile
2) process B gets scheduled, locks, fills and unlocks the cachefile
3) process A gets scheduled, locks the cachefile, notices that the cachefile now exists/is not expired anymore, and continues to overwrite it with an empty lockfile.
Thanks to Linus for noticing (again).
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
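The fix amounts to re-checking the cachefile's freshness after the lock has been acquired. A rough sketch of that pattern follows; the O_EXCL lockfile, the rename-into-place step and all names are illustrative, not the actual cgit code.

    /* Sketch: only regenerate the cachefile if it is still missing or
     * expired once the lock is actually held; another process may have
     * filled it while we were waiting. */
    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    static bool is_expired(const char *path, time_t ttl_seconds)
    {
        struct stat st;

        if (stat(path, &st))
            return true;                  /* missing counts as expired */
        return time(NULL) - st.st_mtime > ttl_seconds;
    }

    static void refresh_if_needed(const char *path, const char *lockfile,
                                  time_t ttl_seconds)
    {
        int fd;

        if (!is_expired(path, ttl_seconds))
            return;
        fd = open(lockfile, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd < 0)
            return;                       /* someone else holds the lock */
        if (is_expired(path, ttl_seconds)) {
            /* ... write the regenerated page to fd here ... */
            close(fd);
            rename(lockfile, path);       /* publish the new cachefile */
        } else {
            close(fd);
            unlink(lockfile);             /* discard the empty lockfile */
        }
    }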
* Avoid infinite loops in caching layer | Lars Hjemli | 2006-12-11 | 1 | -13/+22
Add a global variable, cgit_max_lock_attempts, to avoid the possibility of infinite loops when failing to acquire a lockfile. This could happen on broken setups or under crazy server load. Incidentally, this also fixes a lurking bug in cache_lock() where an uninitialized return value was used.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
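A bounded retry loop of the kind this commit describes could look roughly like this; the default limit, the names and the yield-between-attempts behaviour are illustrative assumptions.

    /* Sketch: stop retrying the lock after a fixed number of attempts
     * instead of looping forever. */
    #include <errno.h>
    #include <fcntl.h>
    #include <sched.h>

    static int acquire_lock_bounded(const char *lockfile, int max_attempts)
    {
        int attempts = 0;
        int fd;

        while ((fd = open(lockfile, O_WRONLY | O_CREAT | O_EXCL, 0644)) < 0) {
            if (errno != EEXIST || ++attempts >= max_attempts)
                return -1;                /* broken setup or heavy contention */
            sched_yield();                /* give the lock holder a chance */
        }
        return fd;                        /* caller fills and publishes it */
    }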
* Fix cache algorithm loophole | Lars Hjemli | 2006-12-11 | 1 | -1/+5
This closes the door on unnecessary calls to cgit_fill_cache(). Noticed by Linus.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
* Add license file and copyright notices | Lars Hjemli | 2006-12-10 | 1 | -0/+8
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
* Add caching infrastructure | Lars Hjemli | 2006-12-10 | 1 | -0/+86
This enables internal caching of page output. Page requests are split into four groups:
1) repo listing (front page)
2) repo summary
3) repo pages with symbolic references in the query string
4) repo pages with constant sha1's in the query string
Each group has a TTL specified in minutes. When a page is requested, a cached filename is stat(2)'ed and st_mtime is compared to time(2). If the TTL has expired (or the file didn't exist), the cached file is regenerated. When generating a cached file, locking is used to avoid parallel processing of the request. If multiple processes try to acquire the same lock, the ones that fail to get the lock serve the (expired) cached file. If the cached file doesn't exist, the process instead calls sched_yield(2) before restarting the request processing.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
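The serve-or-regenerate decision described here can be sketched roughly as follows; the structure, names and the lockfile-then-rename scheme are illustrative, and the two helpers declared at the top are assumed rather than defined.

    /* Sketch of the overall flow: serve a fresh cachefile directly,
     * regenerate it under a lockfile when it is missing or expired, and
     * fall back to the stale copy (or yield and retry) when another
     * process already holds the lock. Illustrative only. */
    #include <fcntl.h>
    #include <sched.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    void serve_file(const char *path);    /* assumed: stream file to client */
    void fill_cache(int fd);              /* assumed: write page output to fd */

    static void check_cache_sketch(const char *path, const char *lockfile,
                                   time_t ttl_seconds)
    {
        struct stat st;

        for (;;) {
            bool have_copy = stat(path, &st) == 0;
            bool fresh = have_copy && time(NULL) - st.st_mtime <= ttl_seconds;

            if (fresh) {
                serve_file(path);         /* within its TTL: serve as-is */
                return;
            }
            int fd = open(lockfile, O_WRONLY | O_CREAT | O_EXCL, 0644);
            if (fd >= 0) {
                fill_cache(fd);           /* we won the lock: regenerate */
                close(fd);
                rename(lockfile, path);
                serve_file(path);
                return;
            }
            if (have_copy) {
                serve_file(path);         /* lock is taken: serve stale copy */
                return;
            }
            sched_yield();                /* nothing to serve; retry (a later
                                           * commit bounds these retries) */
        }
    }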