Sprites cache manager for memory management of large numbers of sprites in your application (soft values)



Description

A cache-management class, useful for those who build applications or games that use bitmaps or other images in real time.
For now I don't know whether this class really handles every kind of situation. ;)
It handles small bitmap animations, or drawings with transparency, perfectly.
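As background, the whole class is built on Java's SoftReference + ReferenceQueue pattern: values stay cached while memory allows, and the GC queue tells the cache which map entries to purge. Here is a minimal, self-contained sketch of that pattern; the names (`SoftCacheSketch`, `KeyedSoftRef`) are illustrative only, not part of the original source, which uses its own `SoftValue` class.

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

/** Minimal soft-reference cache: values can vanish under memory pressure,
 *  and the ReferenceQueue tells us which map entries to purge. */
class SoftCacheSketch<K, V> {

    /** A SoftReference that remembers its key, so a cleared value
     *  can be removed from the map by key (same idea as the SoftValue class below). */
    static final class KeyedSoftRef<K, V> extends SoftReference<V> {
        final K key;
        KeyedSoftRef(V value, K key, ReferenceQueue<? super V> q) {
            super(value, q);
            this.key = key;
        }
    }

    private final Map<K, KeyedSoftRef<K, V>> map = new HashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();

    public void put(K key, V value) {
        cleanup();
        map.put(key, new KeyedSoftRef<>(value, key, queue));
    }

    public V get(K key) {
        cleanup();
        KeyedSoftRef<K, V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    public int size() {
        cleanup();
        return map.size();
    }

    /** Drain the queue: remove map entries whose values the GC has already cleared. */
    private void cleanup() {
        Reference<? extends V> ref;
        while ((ref = queue.poll()) != null) {
            @SuppressWarnings("unchecked")
            KeyedSoftRef<K, V> ksr = (KeyedSoftRef<K, V>) ref;
            map.remove(ksr.key);
        }
    }
}
```

The real class adds LRU/MRU bookkeeping and disk swapping on top of this core, but the purge-via-queue mechanism is the same one its `cleanup()` method implements.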

2006

(Addition: key file I/O, i.e. security management with key-serialized files; see static File makeKeyFile(Serializable[], Serializable[], String, String, boolean))
(Addition, Sept. 2007: the cache is now fully synchronized on the key and value collection views (aka Collections views). In addition, read/write accesses are protected by two synchronization monitors. Finally, the References are cast to the type of the referenced object, as is the ReferenceQueue allocation pool.)
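The disk-swap path of the class serializes each value through a ZipOutputStream holding a single entry, and reads it back through a ZipInputStream. A stripped-down sketch of that round trip, mirroring the `compress()`/`uncompress()` methods in the listing below (the helper class name `ObjectZip` is hypothetical):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

/** Round trip: serialize an object inside a single zip entry, then read it back. */
final class ObjectZip {

    /** Serialize obj into out, deflate-compressed inside one ZipEntry. */
    static void compress(Serializable obj, OutputStream out) throws IOException {
        ZipOutputStream zip = new ZipOutputStream(out);
        zip.putNextEntry(new ZipEntry("sp_" + obj.hashCode()));
        ObjectOutputStream oos = new ObjectOutputStream(zip);
        oos.writeObject(obj);
        oos.flush();          // push buffered object data into the zip entry
        zip.closeEntry();
        zip.finish();         // write the zip central directory without closing out
    }

    /** Read back the object written by compress(). */
    static Serializable uncompress(InputStream in)
            throws IOException, ClassNotFoundException {
        ZipInputStream zis = new ZipInputStream(in);
        zis.getNextEntry();   // position at the single serialized-object entry
        ObjectInputStream ois = new ObjectInputStream(zis);
        return (Serializable) ois.readObject();
    }
}
```

The original methods additionally write a boolean header so the reader knows whether the swap file was compressed; that flag is omitted here for brevity.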

Source / Example:


package sf3.system;

import installer.ExampleFileFilter;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.OptionalDataException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.io.Serializable;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.reflect.InvocationTargetException;
import java.util.*;
import java.util.concurrent.ConcurrentMap;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;
import sf3.system.Threaded;

/** Cache manager for many objects, so that java heap space errors are avoided at runtime :)

 * The listCapacity value sets the limit on the number of elements held simultaneously in heap memory; the default value 1 is the best choice if you enable swap (one living element in the heap while the others are swapped to files). Larger values can cause an OutOfMemoryError when adding new elements once the heap memory limit is reached, in which case it is recommended to use memorySensitiveCallback more frequently to load the elements before adding them.
 * Implementation note: unlike other Maps, the Collections-view Iterators returned by this Map do not directly support removals, and removals are not reflected in the base Map. Use the keys to perform removals correctly!
 */
public class SpritesCacheManager<K, V> implements ConcurrentMap<K, V>, SortedMap<K, V>, Serializable, Threaded {

    /** serial version uid of this class */
    private static final long serialVersionUID = 2323;
    /** @see SpritesCacheListener instances associated with the cache */
    private transient Set<SpritesCacheListener> listeners = Collections.synchronizedSet(new HashSet<SpritesCacheListener>());
    /** last thrown error */
    private Throwable lastError;
    /** list capacity of the implemented maps */
    private int listCapacity;
    private int initialListCapacity;
    /** last recently used sprites */
    private transient Map<K, V> lru = Collections.synchronizedMap(new HashMap<K, V>(listCapacity));
    /** last recently used keys */
    private transient List<K> lruK = Collections.synchronizedList(new Stack<K>());
    /** most recently used sprites */
    private transient Map<K, V> mru = Collections.synchronizedMap(new HashMap<K, V>(listCapacity));
    /** most recently used keys */
    private transient List<K> mruK = Collections.synchronizedList(new Stack<K>());
    /** softly cached sprites (synchronizedMap) */
    private transient Map<K, SoftValue<K, V>> cache;
    /** reference queue polled after the Garbage Collector clears references */
    public transient ReferenceQueue<? super V> _cacheBack = new ReferenceQueue<Object>();
    /** file-cached sprites */
    private SortedMap<K, File> cacheDisk;
    /** compress switch, false by default */
    private boolean compress = false;
    /** swap switch, false by default */
    private boolean swap = false;
    /** timer instance used for automatic memory cleanup */
    private java.util.Timer timer = null;
    /** file swapping extension, "sp3" by default */
    protected String cacheDisk_ext = "sp3";
    /** disk directory for file swapping, "cache" by default */
    protected String cacheDisk_dir = "cache";
    /** files prefix */
    protected String cacheDisk_prefix = "_cache";
    protected boolean debug = true;
    protected transient boolean writing = false;
    protected transient CoalescedThreadsMonitor writeMonitor;
    protected transient boolean reading = false;
    protected transient CoalescedThreadsMonitor readMonitor;
    protected transient Vector<SoftReference<Map<K, V>>> subMaps = new Vector<SoftReference<Map<K, V>>>();
    protected transient Vector<SoftReference<Set<K>>> keysSubLists = new Vector<SoftReference<Set<K>>>();
    protected transient Vector<SoftReference<Set<V>>> valuesSubLists = new Vector<SoftReference<Set<V>>>();

    public void setDebugEnabled(boolean b) { debug = b; }

    public boolean isDebugEnabled() { return debug; }

    private void debug(Object message) { if (debug) { System.err.println(message); } }

    /**
 * Buffered values are referenced by Reference instances linked to the main ReferenceQueue,
 * so that the buffer is cleaned up every time cleanup() is called. e.g.
 * <pre>
 * Object[] value = new Object[100];
 * this.buffer(value);
 * value = null; // the value is then discarded within a short delay by the garbage collector
 * </pre>
 * @see #_cacheBack
 * @see #cleanup()
 */
private transient Map<Integer, SoftValue> buffer = Collections.synchronizedMap(new HashMap<Integer, SoftValue>());

    public SpritesCacheManager(Comparator<? super K> c, int capacity) { this(capacity); comparator = c; }

    public SpritesCacheManager(Comparator<? super K> c) { this(c, 1); }

    /** no-arg constructor; memory will be limited to one living element in cache */
    public SpritesCacheManager() { this(1); }

    /** Constructs the cache with an initial capacity and a WeakHashMap for the main living cache.
 * @see #listCapacity
 * @see WeakHashMap
 * @param capacity cache memory initial and sensitive limit
 * @see #memorySensitiveCallback(Object, Object) */
public SpritesCacheManager(int capacity) { /*for(Method method : getClass().getMethods()) { for(Type generic : method.getGenericParameterTypes()) { for(Class<?> normal : method.getParameterTypes()) System.out.println(method.getName() + " : generics params : " + generic + " normal : " + normal); } }*/ File d; if (!(d = new File(cacheDisk_dir)).isDirectory()) { d.mkdirs(); } listCapacity = initialListCapacity = capacity; setThreadGroup(new CoalescedThreadsMonitor(getClass().getName() + " TG")); cache = Collections.synchronizedMap(new WeakHashMap<K, SoftValue<K, V>>(capacity)); buffer(cache); cacheDisk = Collections.synchronizedSortedMap(new TreeMap<K, File>()); buffer(cacheDisk); } /***/ public void setThreadGroup(ThreadGroup tg) { if (writeMonitor instanceof ThreadGroup) { writeMonitor.interrupt(); } if (readMonitor instanceof ThreadGroup) { readMonitor.interrupt(); } if (tg instanceof ThreadGroup) { writeMonitor = new CoalescedThreadsMonitor(tg, "TG-" + getClass().getName() + "-write"); readMonitor = new CoalescedThreadsMonitor(tg, "TG-" + getClass().getName() + "-read"); } else { writeMonitor = new CoalescedThreadsMonitor("TG-" + getClass().getName() + "-write"); readMonitor = new CoalescedThreadsMonitor("TG-" + getClass().getName() + "-read"); } writeMonitor.setCoalesceEnabled(false); readMonitor.setCoalesceEnabled(false); } /***/ public ThreadGroup getThreadGroup() { return writeMonitor.getParent(); } /***/ public int getInitialListCapacity() { return initialListCapacity; } /** trunks the cache maps to fit lists capacity
 * @see #listCapacity
 * @param ru recently used map
 * @see #lru
 * @param ruK recently used keys
 * @see #lruK
 * @param n desired truncated size */
private void trunk(Map<K, V> ru, List<K> ruK, int n) { while (ru.size() > n) { ru.remove(ruK.remove(ruK.size() - 1)); // fit to cache capacity, pop oldest added entry } cleanup(); } /** the memory sensitive update to the cache Recently Used lists/maps. overflow-protected by a memory sensitive-calback
 * @see #memorySensitiveCallback(Object, Object)
 * @param key the key to update to
 * @param sp the associated object to update to */
private void memorySensitiveUpdate(K key, V sp) throws Throwable { memorySensitiveCallback("memoryUpdate", this, new Object[]{key, sp}, new Class[]{Object.class, Object.class}); } /** memory lists are updated to the given pair K,V
 * Outdated elements are removed. This method needs to be overflow-protected, so we use a memory-sensitive callback to avoid overflow.
 * @see #memorySensitiveUpdate(Object, Object)
 * @param key the key to update to
 * @param sp the associated object to update to */
public void memoryUpdate(K key, V sp) { int currentPty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); if (lru.containsKey(key)) { // if the same sprite has been found in last used items then it will be added to the most used items mru.put(key, sp); mruK.add(0, key); trunk(mru, mruK, listCapacity); } // here we update last recently used list lru.put(key, sp); lruK.add(0, key); trunk(lru, lruK, listCapacity); Thread.currentThread().setPriority(currentPty); } /** removes strong references and key references to object , thus it can be garbage collected . overflow-protected by a memory sensitive-callback.
 * @see #memorySensitiveCallback(Object, Object)
 * @param sp the object to be cleared from memory
 * @return the cleared object (same as the parameter)
 */
private V memorySensitiveClear(V sp) throws Throwable { return (V) memorySensitiveCallback("memoryClear", this, new Object[]{sp}, new Class[]{Object.class}); } /** removes all strong references to the given Object. not overflow-protected, we use a memory sensitive-callback.
 * @see #memorySensitiveClear(Object)
 * @param sp the object to be cleared from memory
 * @return the cleared object (same as the parameter) */
public V memoryClear(V sp) { int currentPty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); if (lru.containsValue(sp)) { Set<Map.Entry<K, V>> set = lru.entrySet(); synchronized (lru) { Iterator<Map.Entry<K, V>> i0 = set.iterator(); for (buffer(i0); i0.hasNext();) { // check for any value in LRU Map.Entry<K, V> entry = i0.next(); buffer(entry); V value = (entry != null) ? entry.getValue() : null; buffer(value); if (value != null) { if (value.equals(sp)) { synchronized (lruK) { Iterator<K> i1 = lruK.iterator(); for (buffer(i1); i1.hasNext();) { // also remove keys in stack if (i1.next().equals(entry.getKey())) { i1.remove(); } } i0.remove(); } } } } } } if (mru.containsValue(sp)) { Set<Map.Entry<K, V>> set = mru.entrySet(); synchronized (mru) { Iterator<Map.Entry<K, V>> i0 = set.iterator(); for (buffer(i0); i0.hasNext();) { // check for any value in MRU Map.Entry<K, V> entry = i0.next(); buffer(entry); V value = (entry != null) ? entry.getValue() : null; buffer(value); if (value != null) { if (value.equals(sp)) { synchronized (mruK) { Iterator<K> i1 = mruK.iterator(); for (buffer(i1); i1.hasNext();) { // also remove keys in stack if (i1.next().equals(entry.getKey())) { i1.remove(); } } i0.remove(); } } } } } } Thread.currentThread().setPriority(currentPty); return sp; } /** clears the mapping memory */ protected void clearMemory() { mru.clear(); mruK.clear(); lru.clear(); lruK.clear(); } /** compresses cache object
 * @param sp the object to compress
 * @param out the stream the compressed data is written to */
private void compress(Serializable sp, OutputStream out) { int pty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); cleanup(); buffer(sp); buffer(out); try { ZipEntry ze = new ZipEntry("sp_" + sp.hashCode()); ZipOutputStream zip = new ZipOutputStream(out); zip.putNextEntry(ze); buffer(ze); buffer(zip); ObjectOutputStream oos = new ObjectOutputStream(zip); oos.writeObject(sp); zip.closeEntry(); zip.finish(); /*zip.close(); oos.close();*/ debug("Compressed Sprite caching " + ze.getCompressedSize() + " !"); } catch (IOException ex) { ex.printStackTrace(); } finally { Thread.currentThread().setPriority(pty); } } /** return the synchronized map of file swapping
 * @return file swapping map that has been synchronized */
public Map<K, File> getSwapMap() { return cacheDisk; } /** uncompresses cache object
 * @param in the stream of compressed data to read from
 * @return the uncompressed object */
private Serializable uncompress(InputStream in) throws NullPointerException { int pty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); cleanup(); Serializable o = null; ZipInputStream zis = new ZipInputStream(in); buffer(zis); ObjectInputStream ois; try { zis.getNextEntry(); ois = new ObjectInputStream(zis); buffer(ois); o = (Serializable) ois.readObject(); buffer(o); zis.closeEntry(); /*zis.close(); ois.close();*/ } catch (EOFException e) { debug("CacheEntry: uncompress: done."); } catch (IOException e) { e.printStackTrace(); } catch (ClassNotFoundException e) { e.printStackTrace(); } finally { zis = null; ois = null; if (o == null) { throw new NullPointerException("Null CacheEntry: cannot uncompress!"); } Thread.currentThread().setPriority(pty); return o; } } /** writes the given mapping to swap. this method doesn't need to be synchronized or overflow-protected otherwise it will cause a dead-lock.
 * @param key the key
 * @param value the associated object
 * @return true or false whether it has succeeded */
private boolean writeSwap(K key, Serializable value) { int pty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); cleanup(); notifyWrite(WRITE_STARTED); buffer(key); buffer(value); File f = null; RandomAccessFile raf = null; ObjectOutputStream oos; FileOutputStream fos; final boolean compress0 = compress; boolean interrupt_ = false; try { synchronized (readMonitor.getMonitor(false)) { while (reading) { readMonitor.waitOnMonitor(10); } writing = true; File dir = new File(cacheDisk_dir); dir.mkdirs(); f = File.createTempFile(cacheDisk_prefix + hashCode(), key + "." + cacheDisk_ext, dir); buffer(f); raf = new RandomAccessFile(f, "rws"); fos = new FileOutputStream(raf.getFD()); //FileLock fl = fos.getChannel().lock(); oos = new ObjectOutputStream(fos); buffer(oos); buffer(fos); oos.writeBoolean(compress0); if (compress0) { compress(value, fos); } else { oos.writeObject(value); } //fl.release(); oos.close(); cacheDisk.put(key, f); f.deleteOnExit(); notifyWrite(WRITE_COMPLETED); oos = null; fos = null; value = null; Thread.currentThread().setPriority(pty); readMonitor.notifyOnMonitor(); } writing = false; synchronized (writeMonitor.getMonitor(false)) { writeMonitor.notifyAllOnMonitor(); } return true; } catch (Exception e) { if (e instanceof InterruptedException) { interrupt_ = true; } e.printStackTrace(); lastError = e; notifyWrite(WRITE_ERROR); if (f != null) { f.delete(); notifyWrite(WRITE_ABORTED); } writing = false; synchronized (writeMonitor.getMonitor(false)) { writeMonitor.notifyAllOnMonitor(); } if (interrupt_) { Thread.currentThread().interrupt(); } return false; } } /** reads swap. this method doesn't need to be synchronized or overflow-protected otherwise it will cause a dead-lock.
 * @param key the key to read from
 * @return the value-object read */
private V readSwap(K key) { int pty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); cleanup(); notifyRead(READ_STARTED); V value = null; File f = null; RandomAccessFile raf = null; ObjectInputStream ois; FileInputStream fis; boolean interrupt_ = false; try { synchronized (writeMonitor.getMonitor(false)) { while (writing) { writeMonitor.waitOnMonitor(10); } reading = true; if (cacheDisk.containsKey(key)) { debug("Swap: Found Key " + key + " "); f = (File) cacheDisk.get(key); debug(f.getCanonicalPath()); raf = new RandomAccessFile(f, "r"); fis = new FileInputStream(raf.getFD()); //FileLock fl = fis.getChannel().lock(0L, Long.MAX_VALUE, true); debug("opening..."); ois = new ObjectInputStream(fis); buffer(ois); buffer(fis); if (ois.readBoolean()) { value = (V) uncompress(fis); } else { value = (V) ois.readObject(); } //fl.release(); buffer(value); ois.close(); } writeMonitor.notifyOnMonitor(); } } catch (Exception e) { if (e instanceof OptionalDataException) { OptionalDataException opt = (OptionalDataException) e; debug("OPTIONAL DATA EXCEPTION : eof=" + opt.eof + " bytes=" + opt.length); } else if (e instanceof InterruptedException) { interrupt_ = true; } e.printStackTrace(); lastError = e; notifyRead(READ_ERROR); } finally { ois = null; fis = null; if (value == null) { notifyRead(READ_ABORTED); } else { debug("Swap: reading done."); notifyRead(READ_COMPLETED); } Thread.currentThread().setPriority(pty); reading = false; synchronized (readMonitor.getMonitor(false)) { readMonitor.notifyAllOnMonitor(); } if (interrupt_) { Thread.currentThread().interrupt(); } return value; } } /** removes file swapping for the given key
 * @param key the key whose swap file should be removed
 * @return true or false whether it has succeeded */
private boolean unswap(K key) { File f = cacheDisk.remove(key); buffer(f); try { if (f instanceof File) { f.delete(); return true; } else { return false; } } catch (Exception e) { e.printStackTrace(); return false; } } /** enable compression of the cache entries
 * @param b compression dis/enabled */
public void setCompressionEnabled(boolean b) { compress = b; } /***/ private Thread getShutdownHooker() { return new Thread(new Runnable() { public void run() { SpritesCacheManager.this.cleanFileSwap(); } }); } /** enable file swap to disk of the cache entries (recommended)
 * @param b swapping dis/enabled */
public void setSwapDiskEnabled(boolean b) { if (b) { if (!swap) { Runtime.getRuntime().addShutdownHook(getShutdownHooker()); } } else { Runtime.getRuntime().removeShutdownHook(getShutdownHooker()); } swap = b; } /** checks whether swapping is enabled or not.
 * @see #setSwapDiskEnabled(boolean)
 * @return true or false */
public boolean isSwapDiskEnabled() { return swap; } /** current cache open size
 * @return size of the living cache (excluding the file swap cache)
 * @see #cacheDisk
 * @see #cache */
public int size() { try { cleanup(); } catch (Exception e) { } finally { return cache.size(); } } /** memory heap allocation size in cache calculated in percentage of total capacity
 * @return the current memory allocation as a percentage of the list capacity
 * @see #listCapacity */
public double allocSize() { double d = 100 * cache.size() / listCapacity; debug("alloc=" + d); return d; } /** adds the given mapping to cache
 * @see #setSwapDiskEnabled(boolean)
 * @param key the key
 * @param obj the associated object
 * @return the previous mapped object or null */
public V add(K key, V obj) { return add(key, obj, swap); } /***/ private void updateSubMaps(boolean removal, K key, V value) { for (Iterator<SoftReference<Map<K, V>>> i = subMaps.iterator(); i.hasNext();) { SoftReference<Map<K, V>> ref = i.next(); Map<K, V> subMap = (Map<K, V>) ref.get(); if (subMap instanceof Map) { if (removal) { subMap.remove(key); } else { subMap.put(key, value); } } } } /***/ private void updateSubListsK(boolean removal, K key) { for (Iterator<SoftReference<Set<K>>> i = keysSubLists.iterator(); i.hasNext();) { SoftReference<Set<K>> listRef = i.next(); Set<K> subList = (Set<K>) listRef.get(); if (subList instanceof Set) { if (removal) { subList.remove(key); } else { subList.add(key); } } } } /***/ private void updateSubListsV(boolean removal, V value) { for (Iterator<SoftReference<Set<V>>> i = valuesSubLists.iterator(); i.hasNext();) { SoftReference<Set<V>> listRef = i.next(); Set<V> subList = (Set<V>) listRef.get(); if (subList instanceof Set) { if (removal) { subList.remove(value); } else { subList.add(value); } } } } /** adds the mapping to cache with the swap option
 * @param key the key
 * @param obj the associated object
 * @param swap option to dis/enable swap for this key
 * @return the previous mapped object or null */
protected V add(K key, V obj, boolean swap) { cleanup(); SoftValue<K, V> ancient = null; try { if (obj != null) { memorySensitiveUpdate(key, obj); ancient = (SoftValue<K, V>) cache.put(key, new SoftValue(obj, key, _cacheBack)); buffer(ancient); updateSubMaps(false, key, obj); updateSubListsK(false, key); updateSubListsV(false, obj); if (ancient != null) { V v = (V) ancient.get(); if (v != null) { if (!v.equals(obj)) { memorySensitiveClear((V) ancient.get()); } } } if (swap) { if (!writeSwap(key, (Serializable) obj)) { throw new Exception("Unable to swap! " + key); } } } } catch (Exception e) { debug("CacheEntry: error while caching " + key + e.getMessage() + "\r\n"); e.printStackTrace(); } finally { return (ancient instanceof SoftReference) ? (V) ancient.get() : null; } } /** safely removes Object from cache (it can be either a key referencing an object or an object currently cached)
 * @param sp can be either a key or a value; if the key class is the same as the value class then it will be used as a key
 * @return the removed object
 */
public V remove(Object sp) { try { K key = (K) sp; SoftValue<K, V> ref = cache.remove(key); updateSubMaps(true, key, null); updateSubListsK(true, key); updateSubListsV(true, (V) ref.get()); V value = (ref instanceof SoftValue) ? (V) ref.get() : null; synchronized (lruK) { Iterator<K> i = lruK.iterator(); for (buffer(i); i.hasNext();) { if (i.next().equals(key)) { i.remove(); } } } synchronized (mruK) { Iterator<K> i = mruK.iterator(); for (buffer(i); i.hasNext();) { if (i.next().equals(key)) { i.remove(); } } } lru.remove(key); mru.remove(key); if (swap) { unswap(key); } return value; } catch (ClassCastException e) { try { return memorySensitiveClear((V) sp); } catch (Throwable t) { t.printStackTrace(); return null; } } } /** Returns the k-referenced object allocated in memory-cache
 * @param k the key that references the object
 * @return V the referenced object */
public V get(Object k) { cleanup(); K key = (K) k; V value = null; if (cache.containsKey(key)) { debug("retrieving softValue referent"); value = (V) cache.get(key).get(); } else if (swap && cacheDisk.containsKey(key)) { debug("Get key " + key + " from swap..."); value = readSwap(key); add(key, value, false); } if (value != null) { try { memorySensitiveUpdate(key, value); } catch (Throwable t) { t.printStackTrace(); } } return value; } /** Checks for this key availability in memory
 * @param key the key to look for
 * @return true or false */
public boolean has(K key) { cleanup(); return (cache.containsKey(key)) ? true : ((swap) ? cacheDisk.containsKey(key) : false); } /***/ public void clearMemorySwap() { if (swap) { Set<Map.Entry<K, File>> set = cacheDisk.entrySet(); synchronized (cacheDisk) { for (Iterator<Map.Entry<K, File>> i = set.iterator(); i.hasNext();) { Map.Entry<K, File> entry = i.next(); File f = entry.getValue(); f.delete(); cacheDisk.remove(entry.getKey()); } } } } /***/ private void clearSubMaps() { for (Iterator<SoftReference<Map<K, V>>> i = subMaps.iterator(); i.hasNext();) { SoftReference<Map<K, V>> mapRef = i.next(); Map<K, V> subMap = (Map<K, V>) mapRef.get(); if (subMap instanceof Map) { subMap.clear(); } } } /***/ private void clearSubListsK() { for (Iterator<SoftReference<Set<K>>> i = keysSubLists.iterator(); i.hasNext();) { SoftReference<Set<K>> listRef = i.next(); Set<K> subList = (Set<K>) listRef.get(); if (subList instanceof Set) { subList.clear(); } } } /***/ private void clearSubListsV() { for (Iterator<SoftReference<Set<V>>> i = valuesSubLists.iterator(); i.hasNext();) { SoftReference<Set<V>> listRef = i.next(); Set<V> subList = (Set<V>) listRef.get(); if (subList instanceof Set) { subList.clear(); } } } /** clear memory allocation maps */ public void clear() { clearSubMaps(); clearSubListsK(); clearSubListsV(); clearMemory(); cache.clear(); cleanup(); } /** safely cleans up the cache to clear unused referenced-objects */ public void cleanup() { int currentPty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); Reference ref; while ((ref = _cacheBack.poll()) != null) { try { if (ref instanceof SoftValue) { K key = ((SoftValue<K, V>) ref).key; cache.remove(key); buffer.remove(key); } if (ref instanceof Reference) { ref.clear(); } debug("SoftCache cleaned up"); } catch (Exception e) { debug("SoftCache cleanup error :"); e.printStackTrace(); } } Thread.currentThread().setPriority(currentPty); } /** Enables cleanup of garbage queue 
 * originally done by the Garbage Collector (enabling this can cause a global thread overflow)
 * @see #cleanup()
 * @param b automatic cleanup dis/enabled */
public void setAutoCleanupEnabled(boolean b) { if (timer != null) { timer.cancel(); timer.purge(); } if (b) { timer = new java.util.Timer("Timer-SpritesCacheManager-Cleanup"); timer.scheduleAtFixedRate(new TimerTask() { public void run() { Thread.currentThread().setPriority(Thread.NORM_PRIORITY + 1); cleanup(); } }, 0L, 20); buffer(timer); } else { timer = null; } } /***/ public void ensureListCapacity(int n) { listCapacity = Math.max(listCapacity, n); notifyEvent(CAPACITY_EXTENDED); } /** adjust cache to a given capacity (amount of objects that will stay alive unless they cause an heap overflow)
 * @param n amount of heap objects to keep alive */
public void trimCacheLive(int n) { int pty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); trunk(mru, mruK, n); trunk(lru, lruK, n); listCapacity = n; notifyEvent(CAPACITY_EXTENDED); Thread.currentThread().setPriority(pty); } /** Callbacks to invoked function
 * @param method the name of the method to call back
 * @param target the object targeted by this method callback
 * @param args the Object arguments of the method callback
 * @param clargs the Class arguments of the method callback
 * @return the usual return value of the called-back method */
public static Object callback(String method, Object target, Object[] args, Class[] clargs) throws NoSuchMethodException, IllegalAccessException, InvocationTargetException { System.err.println("callback on " + target.getClass().getName() + "." + method + "() : target.toString()=" + target); return target.getClass().getMethod(method, clargs).invoke(target, args); } /** Sets priority to invoked task
 * @see Thread#setPriority(int)
 * @param method the name of the method to call back
 * @param target the object targeted by this method callback
 * @param args the Object arguments of the method callback
 * @param clargs the Class arguments of the method callback
 * @param level priority value from 0 to 9 (the normal value is Thread.NORM_PRIORITY)
 * @return the usual return value of the called-back method
 */
public static Object doPriority(String method, Object target, Object[] args, Class[] clargs, int level) throws IllegalAccessException, InvocationTargetException, NoSuchMethodException { int currentPty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(level); Object result = callback(method, target, args, clargs); Thread.currentThread().setPriority(currentPty); return result; } /** This is the second important function of this cache. It calls back to invoked method with memory sensitivity selfcare, i.e. memory will not cause a crash of the JVM due to the memory limit defined by the listCapacity value.
 * Eventually the callback will throw OutOfMemoryError to the cache, which is able to clear heap memory overflows. Least recently used elements will be discarded to free memory space if they are not referenced anywhere other than in the cache.
 * Caution: you SHOULD NOT use a callback method together with any of this cache's class methods, because of synchronization issues with Java locking the Map instance used by this cache.
 * @param method the name of the method to call back
 * @param target the object targeted by this method callback
 * @param args the Object arguments of the method callback
 * @param clargs the Class arguments of the method callback
 * @return the usual return value of the called-back method */
public Object memorySensitiveCallback(String method, Object target, Object[] args, Class[] clargs) throws Throwable { try { return callback(method, target, args, clargs); } catch (NoSuchMethodException ex) { ex.printStackTrace(); return null; } catch (IllegalArgumentException ex) { ex.printStackTrace(); return null; } catch (IllegalAccessException ex) { ex.printStackTrace(); return null; } catch (InvocationTargetException ex) { Throwable t = ex.getTargetException(); buffer(t); if (t instanceof OutOfMemoryError) { if (listCapacity == 0) { throw (OutOfMemoryError) t; } else if (listCapacity == 1) { clearMemory(); } else { trimCacheLive((int) Math.floor((float)listCapacity / 2.0f)); } cleanup(); return memorySensitiveCallback(method, target, args, clargs); } else { throw t; } } } /** overriden finalization method that clears the cache and stops any running activity*/ protected void finalize() throws Throwable { if (timer != null) { timer.cancel(); timer = null; } clear(); super.finalize(); } /** cleans up the file swapping directory
 * @see #cacheDisk_dir */
public void cleanFileSwap() { File dir = null; dir = new File(cacheDisk_dir); ExampleFileFilter ff = new ExampleFileFilter(); ff.addExtension(cacheDisk_ext); File[] swapFiles = null; if (dir.isDirectory()) { swapFiles = dir.listFiles((java.io.FileFilter) ff); } for (File f : swapFiles) { try { f.delete(); } catch (Exception e) { e.printStackTrace(); } } } /** Buffers the object to avoid memory freeze when it has to be cleared off the memory.
 * Note: the buffer is saved in a synchronized map that refreshes itself through ReferenceQueue.poll()
 * @param obj the object to "buffer"
 * @see SoftValue
 */
public void buffer(Object obj) { int pty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); cleanup(); if (obj != null) { buffer.put(obj.hashCode(), new SoftValue(obj, obj.hashCode(), _cacheBack)); } Thread.currentThread().setPriority(pty); } /** checks for the cache if it's empty
 * @return true or false */
public boolean isEmpty() { return cache.isEmpty(); } /** checks for a reference availability
 * @param key the key to check for
 * @return true or false */
public boolean containsKey(Object key) { return has((K) key); } /** checks for a reference availability
 * @param object the object to check for
 * @return true or false */
public boolean containsValue(Object object) { Collection<SoftValue<K, V>> coll = cache.values(); synchronized (cache) { for (Iterator<SoftValue<K, V>> i = coll.iterator(); i.hasNext();) { Object ref = i.next().get(); if (ref != null) { if (ref.equals(object)) { return true; } } } } return false; } /** adds a new mapping
 * @see #add(Object, Object)
 * @param key the key
 * @param value the object
 * @return the previous mapped object or null */
public V put(K key, V value) { return add(key, value); } /** adds a complete map to the cache
 * @param map a map */
public void putAll(Map<? extends K, ? extends V> map) { Map<? extends K, ? extends V> smap = Collections.synchronizedMap(map); Set<? extends K> set = smap.keySet(); synchronized (smap) { for (Iterator<? extends K> i = set.iterator(); i.hasNext();) { K k = i.next(); put(k, smap.get(k)); } } } /** returns the key-set of the cache
 * @return the key-set */
public Set<K> keySet() { TreeSet<K> sortedKeys = new TreeSet<K>(comparator); Set<K> set = (swap) ? cacheDisk.keySet() : cache.keySet(); synchronized ((swap) ? cacheDisk : cache) { for (Iterator<K> i = set.iterator(); i.hasNext();) { sortedKeys.add(i.next()); } } keysSubLists.add(new SoftReference(sortedKeys, _cacheBack)); return (Set<K>) sortedKeys; } /** retuns the cached objects collection
  • @return collection of objects that are currently cached*/
public Collection<V> values() { HashSet<V> cached = new HashSet<V>(); Set<K> set = (swap) ? cacheDisk.keySet() : cache.keySet(); synchronized ((swap) ? cacheDisk : cache) { for (Iterator<K> i = set.iterator(); i.hasNext();) { K key = i.next(); cached.add(get(key)); } } valuesSubLists.add(new SoftReference(cached, _cacheBack)); return (Collection<V>) cached; } /** the mapping entries
  • @return the mapping entries containing the cached objects*/
public Set<Map.Entry<K, V>> entrySet() { SortedMap<K, V> m = new TreeMap<K, V>(comparator); Set<K> set = (swap) ? cacheDisk.keySet() : cache.keySet(); synchronized ((swap) ? cacheDisk : cache) { for (Iterator<K> i = set.iterator(); i.hasNext();) { K key = i.next(); m.put(key, get(key)); } } subMaps.add(new SoftReference(m, _cacheBack)); return m.entrySet(); } /** writes the entire cache to the output.
  • @param out the output */
private void writeObject(ObjectOutputStream out) throws IOException { int pty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); out.defaultWriteObject(); Set<Map.Entry<K, File>> set = cacheDisk.entrySet(); synchronized (cacheDisk) { for (Iterator<Map.Entry<K, File>> files = set.iterator(); files.hasNext();) { Map.Entry<K, File> entry = files.next(); buffer(entry); File f = entry.getValue(); buffer(f); RandomAccessFile raf = new RandomAccessFile(f, "r"); FileInputStream fis = new FileInputStream(raf.getFD()); buffer(fis); buffer(raf); //FileLock fl = fis.getChannel().lock(0L, Long.MAX_VALUE, true); // write swap : file name to recover swap < file length < file out.writeObject(entry.getKey()); out.writeLong(f.length()); debug(f.length()); try { byte[] b = new byte[(int) 512]; int readBytes = 0; int writtenBytes = 0; while ((readBytes = fis.read(b)) != -1) { out.write(b, 0, readBytes); writtenBytes += readBytes; } debug(writtenBytes); // fl.release(); fis.close(); raf.close(); } catch (Exception e) { e.printStackTrace(); } finally { fis = null; raf = null; cleanup(); } } } Thread.currentThread().setPriority(pty); } /** reads the entire cache from the input.
  • @param in input*/
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException { int pty = Thread.currentThread().getPriority(); Thread.currentThread().setPriority(Thread.MAX_PRIORITY); in.defaultReadObject(); keysSubLists = new Vector<SoftReference<Set<K>>>(); valuesSubLists = new Vector<SoftReference<Set<V>>>(); subMaps = new Vector<SoftReference<Map<K, V>>>(); writing = false; reading = false; setThreadGroup(new CoalescedThreadsMonitor(getClass().getName() + " TG")); listeners = Collections.synchronizedSet(new HashSet<SpritesCacheListener>()); _cacheBack = new ReferenceQueue<Object>(); buffer = Collections.synchronizedMap(new HashMap<Integer, SoftValue>()); buffer(buffer); buffer(_cacheBack); cache = Collections.synchronizedMap(new WeakHashMap<K, SoftValue<K, V>>(listCapacity)); buffer(cache); lru = Collections.synchronizedMap(new HashMap<K, V>()); buffer(lru); lruK = Collections.synchronizedList(new Stack<K>()); buffer(lruK); mru = Collections.synchronizedMap(new HashMap<K, V>()); buffer(mru); mruK = Collections.synchronizedList(new Stack<K>()); buffer(mruK); File file = null; boolean read = true; do { RandomAccessFile raf = null; ObjectOutputStream oos = null; FileOutputStream fos = null; try { file = cacheDisk.get((K) in.readObject()); long len = in.readLong(); buffer(len); if (file.exists()) { if (file.length() == len) { in.skip(len); continue; } } file.deleteOnExit(); debug("read " + file); raf = new RandomAccessFile(file, "rws"); raf.getChannel().truncate(0); fos = new FileOutputStream(raf.getFD()); buffer(fos); buffer(raf); //FileLock fl = fos.getChannel().lock(); byte[] b = new byte[(int) 512]; int readBytes = 0; int rBytes; while ((rBytes = in.read(b)) != -1) { fos.write(b, 0, rBytes); readBytes += rBytes; } debug("bytes read: " + readBytes + " file length was: " + len); //fl.release(); raf.close(); fos.close(); } catch (EOFException e) { e.printStackTrace(); read = false; } catch (Exception e) { read = false; if (e instanceof 
OptionalDataException) { OptionalDataException opt = (OptionalDataException) e; in.skipBytes(opt.length); debug("skip" + opt.length); read = true; if (opt.eof) { read = false; } } else { e.printStackTrace(); } } finally { oos = null; fos = null; raf = null; cleanup(); } } while (read); debug("SpritesCacheManager read from DataStream!"); Thread.currentThread().setPriority(pty); } /** adds a listener to the cache.
  • @param l the listener*/
public void addSpritesCacheListener(SpritesCacheListener l) { listeners.add(l); } /** removes one listener from the cache
  • @param l the listener*/
public void removeSpritesCacheListener(SpritesCacheListener l) { listeners.remove(l); } /** notifies all listeners of any write event
  • @param event the event
  • @see #WRITE_STARTED
  • @see #WRITE_ABORTED
  • @see #WRITE_COMPLETED
  • @see #WRITE_ERROR*/
public void notifyWrite(int event) { switch (event) { case WRITE_STARTED: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l != null) { l.writeStarted(); } } } break; case WRITE_ABORTED: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l != null) { l.writeAborted(); } } } break; case WRITE_COMPLETED: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l != null) { l.writeCompleted(); } } } break; case WRITE_ERROR: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l != null) { l.writeError((lastError != null) ? lastError.getStackTrace() : null); } } } break; default: } } /** notifies the listeners of any read event
  • @param event the event
  • @see #READ_STARTED
  • @see #READ_ABORTED
  • @see #READ_COMPLETED
  • @see #READ_ERROR*/
public void notifyRead(int event) { switch (event) { case READ_STARTED: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l != null) { l.readStarted(); } } } break; case READ_ABORTED: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l != null) { l.readAborted(); } } } break; case READ_COMPLETED: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l != null) { l.readCompleted(); } } } break; case READ_ERROR: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l != null) { l.readError((lastError != null) ? lastError.getStackTrace() : null); } } } break; default: } } /** a "read started" event */ static final int READ_STARTED = 0; /** a "read aborted" event */ static final int READ_ABORTED = 1; /** a "read completed" event */ static final int READ_COMPLETED = 2; /** a "read error" event */ static final int READ_ERROR = 3; /** a "write started" event */ static final int WRITE_STARTED = 4; /** a "write aborted" event */ static final int WRITE_ABORTED = 5; /** a "write completed" event */ static final int WRITE_COMPLETED = 6; /** a "write error" event */ static final int WRITE_ERROR = 7; /** a "capacity extended" event */ static final int CAPACITY_EXTENDED = 8; /** notifies the listeners for any event
  • @see #notifyRead(int)
  • @see #notifyWrite(int) */
public void notifyEvent(int event) { switch (event) { case READ_STARTED: case READ_ABORTED: case READ_COMPLETED: case READ_ERROR: notifyRead(event); break; case WRITE_STARTED: case WRITE_ABORTED: case WRITE_COMPLETED: case WRITE_ERROR: notifyWrite(event); break; case CAPACITY_EXTENDED: synchronized (listeners) { for (Iterator<SpritesCacheListener> i = listeners.iterator(); i.hasNext();) { SpritesCacheListener l = i.next(); if (l instanceof SpritesCacheListener) { l.capacityExtended(listCapacity); } } } break; default: } } /***/ public static File makeKeyFile(Serializable key, File f, String fileDir, String keyFilename, boolean compress) { try { FileInputStream fis = new FileInputStream(new RandomAccessFile(f, "r").getFD()); byte[] b = new byte[512]; SpritesCacheManager<Integer, byte[]> fileContents = new SpritesCacheManager<Integer, byte[]>((int) f.length()); fileContents.setSwapDiskEnabled(true); int i = 0; int readBytes; while ((readBytes = fis.read(b)) != -1) { byte[] rb = new byte[readBytes]; for (int j = 0; j < rb.length; j++) { rb[j] = b[j]; } fileContents.put(i++, rb); } fis.close(); return makeKeyFile(new Serializable[]{key}, new SpritesCacheManager[]{fileContents}, fileDir, keyFilename, compress); } catch (FileNotFoundException ex) { ex.printStackTrace(); return null; } catch (IOException ex) { ex.printStackTrace(); return null; } } /***/ public static File makeKeyFile(Serializable[] key, Serializable[] serialData, String fileDir, String keyFilename, boolean compress) throws IndexOutOfBoundsException { if (key.length != serialData.length) { throw new IndexOutOfBoundsException("Keys and serial data must be of the same length: there are " + key.length + " keys and " + serialData.length + " serial data entries"); } SpritesCacheManager<Serializable, Serializable> spm = new SpritesCacheManager<Serializable, Serializable>(); spm.setSwapDiskEnabled(true); spm.setCompressionEnabled(compress); for (int i = 0; i < key.length; i++) { spm.put(key[i], serialData[i]); } new File(fileDir).mkdirs(); spm.cacheDisk_dir = fileDir; File keyFile = new File(fileDir + File.separator + keyFilename); ObjectOutputStream out; try { out = new ObjectOutputStream(new FileOutputStream(new RandomAccessFile(keyFile, "rws").getFD())); out.writeObject(spm); out.close(); return keyFile; } catch (FileNotFoundException ex) { ex.printStackTrace(); return null; } catch (IOException ex) { ex.printStackTrace(); return null; } } /***/ public static File extractKeyFile(File f, Serializable key, String extractFilename) { try { SpritesCacheManager<Integer, byte[]> serial = (SpritesCacheManager<Integer, byte[]>) extractKeyData(f, key); if (serial == null) { return null; } File xF = new File(extractFilename); FileOutputStream fos = new FileOutputStream(new RandomAccessFile(xF, "rws").getFD()); for (int i = 0; i < serial.getSwapMap().size(); i++) { fos.write(serial.readSwap(i)); } fos.close(); return xF; } catch (FileNotFoundException ex) { ex.printStackTrace(); return null; } catch (IOException ex) { ex.printStackTrace(); return null; } } /***/ public static Serializable extractKeyData(File f, Serializable key) { ObjectInputStream in; try { in = new ObjectInputStream(new FileInputStream(new RandomAccessFile(f, "rws").getFD())); SpritesCacheManager<Serializable, Serializable> spm; spm = (SpritesCacheManager<Serializable, Serializable>) in.readObject(); in.close(); Serializable serialData = null; serialData = spm.readSwap(key); return serialData; } catch (FileNotFoundException ex) { ex.printStackTrace(); return null; } catch (IOException ex) { ex.printStackTrace(); return null; } catch (ClassNotFoundException ex) { ex.printStackTrace();
return null; } } /** the comparator doesn't get Serialized. use setComparator after a deserialization process! @see #setComparator(Comparator)*/ public transient Comparator<? super K> comparator = null; /***/ public void setComparator(Comparator<? super K> c) { comparator = c; } /***/ public Comparator<? super K> getComparator() { return comparator(); } /***/ public Comparator<? super K> comparator() { return comparator; } /***/ public SortedMap<K, V> subMap(K fromKey, K toKey) { Set<Map.Entry<K, V>> entries = entrySet(); SortedMap<K, V> sortedMap = new TreeMap<K, V>(comparator); synchronized (this) { for (Iterator<Map.Entry<K, V>> i = entries.iterator(); i.hasNext();) { Map.Entry<K, V> entry = i.next(); sortedMap.put(entry.getKey(), entry.getValue()); } notify(); } subMaps.add(new SoftReference(sortedMap, _cacheBack)); return sortedMap.subMap(fromKey, toKey); } /***/ public SortedMap<K, V> headMap(K headKey) { Set<Map.Entry<K, V>> entries = entrySet(); SortedMap<K, V> sortedMap = new TreeMap<K, V>(comparator); synchronized (this) { for (Iterator<Map.Entry<K, V>> i = entries.iterator(); i.hasNext();) { Map.Entry<K, V> entry = i.next(); sortedMap.put(entry.getKey(), entry.getValue()); } notify(); } subMaps.add(new SoftReference(sortedMap, _cacheBack)); return sortedMap.headMap(headKey); } /***/ public SortedMap<K, V> tailMap(K tailKey) { Set<Map.Entry<K, V>> entries = entrySet(); SortedMap<K, V> sortedMap = new TreeMap<K, V>(comparator); synchronized (this) { for (Iterator<Map.Entry<K, V>> i = entries.iterator(); i.hasNext();) { Map.Entry<K, V> entry = i.next(); sortedMap.put(entry.getKey(), entry.getValue()); } notify(); } subMaps.add(new SoftReference(sortedMap, _cacheBack)); return sortedMap.tailMap(tailKey); } /***/ public K firstKey() { Set<Map.Entry<K, V>> entries = entrySet(); SortedMap<K, V> sortedMap = new TreeMap<K, V>(comparator); synchronized (this) { for (Iterator<Map.Entry<K, V>> i = entries.iterator(); i.hasNext();) { Map.Entry<K, V> entry = i.next(); 
if (entry instanceof Map.Entry) { sortedMap.put(entry.getKey(), entry.getValue()); } } notify(); } subMaps.add(new SoftReference(sortedMap, _cacheBack)); return sortedMap.firstKey(); } /***/ public void setListCapacity(int listCapacity) { this.listCapacity = listCapacity; notifyEvent(CAPACITY_EXTENDED); } /***/ public int getListCapacity() { return listCapacity; } /***/ public K lastKey() { Set<Map.Entry<K, V>> entries = entrySet(); SortedMap<K, V> sortedMap = new TreeMap<K, V>(comparator); synchronized (this) { for (Iterator<Map.Entry<K, V>> i = entries.iterator(); i.hasNext();) { Map.Entry<K, V> entry = i.next(); sortedMap.put(entry.getKey(), entry.getValue()); } notify(); } subMaps.add(new SoftReference(sortedMap, _cacheBack)); return sortedMap.lastKey(); } /***/ public V putIfAbsent(K k, V v) { if (!has(k)) { return put(k, v); } else { return get(k); } } /***/ public boolean remove(Object k, Object v) { V current; if ((current = get(k)) != null) { if (!current.equals(v)) { remove(k); return true; } } return false; } /***/ public boolean replace(K key, V oldValue, V newValue) { V current = get(key); if (current != null) { if (current.equals(oldValue)) { put(key, newValue); return true; } } return false; } /***/ public V replace(K key, V value) { if (has(key)) { return put(key, value); } else { return null; } } /** We define our own subclass of SoftReference which contains
  • not only the value but also the key to make it easier to find
  • the entry in the HashMap after it's been garbage collected. */
class SoftValue<K, V> extends SoftReference<V> { final K key; // always make data member final /** Did you know that an outer class can access private data
  • members and methods of an inner class? I didn't know that!
  • I thought it was only the inner class who could access the
  • outer class's private information. An outer class can also
  • access private members of an inner class inside its inner
  • class. */
SoftValue(V k, K key, ReferenceQueue<? super V> q) { super(k, q); this.key = key; } } }
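The heart of the class above is a map whose values are SoftReferences, with stale entries purged through a ReferenceQueue once the garbage collector has cleared them. Here is a simplified, self-contained sketch of that same pattern — `SoftCache` and its methods are invented for illustration and are not the actual SpritesCacheManager API:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of a soft-value cache: values may be reclaimed by the GC
 *  under memory pressure; dead map entries are purged via a ReferenceQueue. */
class SoftCache<K, V> {
    /** A SoftReference that remembers its key, so the map entry
     *  can be removed after the referent has been collected. */
    private static class SoftValue<K, V> extends SoftReference<V> {
        final K key;
        SoftValue(V value, K key, ReferenceQueue<? super V> q) {
            super(value, q);
            this.key = key;
        }
    }

    private final Map<K, SoftValue<K, V>> map = new HashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();

    /** Drop map entries whose referents the GC has already cleared. */
    @SuppressWarnings("unchecked")
    private void cleanup() {
        Reference<? extends V> ref;
        while ((ref = queue.poll()) != null) {
            map.remove(((SoftValue<K, V>) ref).key);
        }
    }

    public void put(K key, V value) {
        cleanup();
        map.put(key, new SoftValue<>(value, key, queue));
    }

    /** Returns the cached value, or null if absent or already collected. */
    public V get(K key) {
        cleanup();
        SoftValue<K, V> ref = map.get(key);
        return (ref == null) ? null : ref.get();
    }

    public int size() {
        cleanup();
        return map.size();
    }
}
```

As long as a value is also strongly referenced elsewhere, `get()` returns it; once only the soft reference remains, the JVM is free to clear it before throwing an OutOfMemoryError, which is exactly the behavior the cache above relies on.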

Conclusion :


devx.com or ibm.com may be useful... Otherwise, the scheme is arranged around recovering cached references when the garbage collector comes through and clears the referenced data to free allocated memory....
Use a Timer to regularly "clean" the cache (cleanup()) and reload (add(K,V)) the objects in use.
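That periodic cleanup can be wired up with a plain java.util.Timer. A minimal sketch — `CacheJanitor` is a hypothetical helper; in practice the Runnable would call the manager's cleanup():

```java
import java.util.Timer;
import java.util.TimerTask;

/** Schedules a periodic cache cleanup on a background thread. */
class CacheJanitor {
    // Daemon timer thread: it will not keep the JVM alive on exit.
    private final Timer timer = new Timer("cache-janitor", true);

    /** Runs the given cleanup action every periodMillis milliseconds. */
    void start(Runnable cleanup, long periodMillis) {
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() { cleanup.run(); }
        }, periodMillis, periodMillis);
    }

    /** Cancels the periodic cleanup. */
    void stop() { timer.cancel(); }
}
```

Typical use would be `janitor.start(cache::cleanup, 1000L)` to sweep the cache once per second; the right period depends on allocation pressure, as the comments below discuss.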
Objects already in the cache will not be cleared as long as they are "solidly" referenced elsewhere (a strong reference, as opposed to a soft reference).
www.developpez.com offers a tutorial on References in Java. (a search engine and a forum are available free of charge)
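The point about strong references can be demonstrated directly with java.lang.ref.SoftReference, independently of the cache class. A small sketch (class and method names invented for illustration):

```java
import java.lang.ref.SoftReference;

class SoftVsStrong {
    static boolean heldWhileStrong() {
        byte[] sprite = new byte[1024];                      // strong reference
        SoftReference<byte[]> soft = new SoftReference<>(sprite);
        // While the referent is strongly reachable, the JVM guarantees
        // that the soft reference has not been cleared.
        boolean stillThere = (soft.get() == sprite);
        sprite = null; // drop the strong reference:
        // from here on the GC *may* clear the soft reference under memory
        // pressure, so soft.get() must always be null-checked.
        return stillThere;
    }
}
```

This is why objects in active use should also be kept in a strong (ordinary) reference somewhere: the cache alone only promises best-effort retention.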

Comments
Well, in general I simply manage my buffers.. ;o) and I handle several hundred thousand objects ;o) ... and never more than a few hundred in memory.. ;o) ..

That said, when I get the chance I'll see whether your classes can be adapted to a mapping program I once wrote ;o) with images of several GB ;o) to see if the performance beats the solution I went with ;o)
Well then, try it with this code and see whether you get past the out-of-memory errors or not! Even 1 GB of RAM is not enough to allocate that many Java objects. It's somewhat the same in C, but Java simplifies matters with References. My PC/Mac runs with 1 GB of RAM, yet without this cache it is impossible to load all the applications' resources. With anywhere from 10 to 10,000 objects live at once, the idea of having a "cache manager" becomes essential. :D
At the same ;o) time, nowadays ;o) you really have to be asking for an out of memory... allocate 1 GB to the program and that leaves you plenty of headroom ;o).... (I've still managed to blow past it, though) ;o)

Frankly, I find it pretty complex for such a small thing ;o) ...
But it deserves to be tested ;o) to see if it really brings a performance gain ;o) on large images !!!
RAM runs at around a nanosecond... so if I don't enable the disk swap it should run at about 1 ms....
For sure. I use PNG. Now I don't know what frequency to set the cache timer to.... is the disk independent of the timer? A hard drive runs at about 9 ms. If cleanup() kicks in earlier than the next disk read/write loop, what happens?
Anyway, if you ever test this cache, I'll be waiting for feedback.. :)