
process getting killed before reaching 3GB RAM

Hi Team,

My process is getting killed. I traced it, and it's happening when memory usage is much lower, only 78 MB:

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  self._setitem_single_block(indexer, value, name)
Memory used: 77.80 MB

Console closed.

This is happening when I run it in a console as well as when it is executed as a scheduled task. Is this stealing processing power from one user and maybe selling it to someone else? I am getting an email telling me that my process has hit the 3GB limit, but this is a lightweight script. Please fix this ASAP.
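As an aside, the 77.80 MB figure above presumably comes from the script's own logging; a resident-memory limit counts the process's RSS, which can briefly spike well above any single snapshot. A minimal sketch of logging both current and peak resident memory, assuming a Linux host (the helper name is made up for illustration):

    import resource

    def log_memory():
        # Current resident set size, read from /proc (Linux only).
        current_kb = 0
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    current_kb = int(line.split()[1])
                    break
        # Peak resident set size so far; ru_maxrss is in kilobytes on Linux.
        peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"Memory used: {current_kb / 1024:.2f} MB (peak {peak_kb / 1024:.2f} MB)")

    log_memory()

If the peak figure is much higher than the current one, a transient spike rather than steady usage is the likely reason for the kill.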

Let's keep this to the email thread with support to avoid confusion.

No problem, but please ensure consistency: either I need to be fully aware of the calculation that killed my script multiple times today, or you need to correct the killing process so that only processes really at 3GB or beyond are killed. Without consistency it would be impossible to trust running anything that is materially important to me on this platform; I would always be in doubt whether my scheduled tasks are going to run or not. PythonAnywhere is part of my long-term plan to bring more use cases forward in the future. Please ensure this is addressed before Monday; happy to answer anything over email.

It is consistent. There is no calculation involved. If the process goes over 3GB of resident memory, it is killed.
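If short spikes in resident memory are what trip the limit (the console output above suggests pandas is involved), a common way to keep peak usage down is to process data in chunks rather than loading it all at once. A generic sketch, not the poster's actual script; the file and column names are placeholders:

    import pandas as pd

    total = 0.0
    # Reading in chunks keeps only ~100,000 rows resident at a time,
    # so peak RSS stays well below the 3GB limit even for a large file.
    for chunk in pd.read_csv("data.csv", chunksize=100_000):
        # "data.csv" and "value" are placeholder names for illustration.
        # Assigning via .loc avoids the chained-assignment warning
        # shown in the console output above.
        chunk.loc[:, "value"] = chunk["value"].fillna(0)
        total += chunk["value"].sum()

    print(f"Total: {total}")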