This commit updates the lock functions of Spinlock and
RecursiveSpinlock to return the InterruptsState of the processor,
instead of the processor flags. The unlock functions would only look at
the interrupt flag of the processor flags, so we now use the
InterruptsState enum to clarify the intent, and so that we can use the
same Spinlock code for the aarch64 build.
To not break the build, all call sites are updated as well.
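The new shape looks roughly like this (a sketch of the idea; the
Processor helper names are my assumption, not necessarily the exact
code):

    enum class InterruptsState {
        Enabled,
        Disabled,
    };

    InterruptsState Spinlock::lock()
    {
        // Capture whether interrupts were enabled, then disable them
        // for as long as the lock is held.
        InterruptsState previous_state = Processor::interrupts_state();
        Processor::disable_interrupts();
        // ... acquire the lock ...
        return previous_state;
    }

    void Spinlock::unlock(InterruptsState previous_state)
    {
        // ... release the lock ...
        Processor::restore_interrupts_state(previous_state);
    }

Since InterruptsState only encodes enabled/disabled, none of this
depends on the layout of x86 processor flags, which is what makes the
code reusable on aarch64.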
Globally shared MemoryManager state is now kept in a GlobalData struct
and wrapped in SpinlockProtected.
A small set of members is left outside the GlobalData struct, as they
are only set during boot initialization and then remain constant.
This allows us to access those members without taking any locks.
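A minimal sketch of the pattern (member name is illustrative):

    SpinlockProtected<GlobalData> m_global_data;

    // All access to the shared state goes through the lock:
    m_global_data.with([&](GlobalData& data) {
        // read or mutate MemoryManager state here
    });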
Until now, our kernel has reimplemented a number of AK classes to
provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the Kernel classes so that they can coexist with
the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of
using external locking.
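To sketch the difference (types and members are illustrative):

    // Internal locking, baked into the smart pointer itself:
    LockRefPtr<Thread> m_thread;

    // External locking, with a plain AK::RefPtr guarded explicitly:
    SpinlockProtected<RefPtr<Thread>> m_thread;
    m_thread.with([&](RefPtr<Thread>& thread) {
        // use thread while holding the lock
    });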
All users that relied on the default constructor use a None lock rank
for now. This will make it easier in the future to remove LockRank and
actually annotate the ranks, by searching for None.
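Concretely, call sites now look something like:

    // Previously: Spinlock m_lock;
    Spinlock m_lock { LockRank::None };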
Note that SMP is still off by default, but this basically removes the
weird "SMP on but threads don't get scheduled" behavior we had by
default. If you pass "smp=on" to the kernel, you now get SMP. :^)
Each of these strings would previously rely on StringView's char const*
constructor overload, which would call __builtin_strlen on the string.
Since we now have operator ""sv, we can replace these with much simpler
versions. This opens the door to being able to remove
StringView(char const*).
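For example:

    // Before: goes through StringView(char const*), which calls
    // __builtin_strlen on the literal:
    StringView name = "SystemServer";

    // After: the literal operator receives the length directly from
    // the compiler:
    auto name = "SystemServer"sv;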
No functional changes.
This change unifies the naming convention for kernel tasks.
The goal of this change is to:
- Make the task names more descriptive, so users can more
easily understand their purpose in System Monitor.
- Unify the naming convention so they are consistent.
At the end of sys$execve(), we perform a context switch from the old
executable into the new executable.
However, the Kernel::Thread object we are switching to is the *same*
thread as the one we are switching from. So we must not assume the
from_thread and to_thread are different threads.
We had a bug caused by this misconception, where the "from" thread would
always get marked as "inactive" when switching to a new thread.
This meant that threads would always get switched into "inactive" mode
on first context switch into them.
If a thread then tried blocking on a kernel mutex within its first time
slice, we'd end up in Thread::block(Mutex&) with an inactive thread.
Once a thread is inactive, the scheduler believes it's okay to
reactivate the thread (by scheduling it.) If a thread got re-scheduled
prematurely while setting up a mutex block, things would fall apart and
we'd crash in Thread::block() due to the thread state being "Runnable"
instead of the expected "Running".
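In other words, the fix boils down to a guard of roughly this shape
(the setter name is illustrative):

    // Only mark the outgoing thread inactive if we are actually
    // switching to a *different* thread:
    if (from_thread != to_thread)
        from_thread->set_active(false);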
Move this architecture-specific sanity check (IOPL must be 0) out of
Scheduler and into the x86 enter_thread_context(). Also do this for
every thread and not just userspace ones.
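The check itself is small; roughly (accessor names are illustrative):

    // IOPL lives in bits 12 and 13 of EFLAGS/RFLAGS. Nothing in the
    // system should ever run with a nonzero IOPL.
    auto iopl = (to_thread->regs().flags() >> 12) & 3u;
    VERIFY(iopl == 0);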
It was annoyingly hard to spot these when we were using them with
different amounts of qualification everywhere.
This patch uses Thread::State::Foo everywhere instead of Thread::Foo
or just Foo.
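For example:

    // Previously, either of these would appear:
    //   thread.set_state(Thread::Runnable);
    //   set_state(Runnable);
    // Now it is always spelled out:
    thread.set_state(Thread::State::Runnable);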
Signal dispatch is already taken care of elsewhere, so there appears to
be no need for the hack in enter_current().
This also allows us to remove the Thread::m_in_block flag, simplifying
thread blocking logic somewhat.
Verified with the original repro for #4336 which this was meant to fix.
In order to reduce our reliance on __builtin_{ffs, clz, ctz, popcount},
this commit removes all calls to these functions and replaces them with
the equivalent functions in AK/BuiltinWrappers.h.
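For instance:

    #include <AK/BuiltinWrappers.h>

    u32 mask = 0b1000;
    // Before:
    //   int bit = __builtin_ctz(mask);
    // After:
    int bit = count_trailing_zeroes(mask);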
We previously allowed Thread to exist in a state where its m_name was
null, and had to work around that in various places.
This patch removes that possibility and forces those who would create a
thread (or change the name of one) to provide a NonnullOwnPtr<KString>
with the name.
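Creation sites now allocate the name up front, along these lines
(sketch):

    // KString::try_create returns ErrorOr<NonnullOwnPtr<KString>>, so
    // allocation failure is handled before the thread ever exists.
    auto thread_name = TRY(KString::try_create("WorkQueue"sv));
    // ...which is then moved into thread creation (or set_name()).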
This avoids a race between getting the processor-specific SchedulerData
and accessing it. (Switching to a different CPU in that window means
that we're operating on the wrong SchedulerData.)
Co-authored-by: Tom <tomut@yahoo.com>
This patch does three things:
- Convert the global thread list from a HashMap to an IntrusiveList
- Combine the thread list and its lock into a SpinLockProtectedValue
- Customize Thread::unref() so it locks the list while unreffing
This closes the same race window for Thread as @sin-ack's recent changes
did for Process.
Note that the HashMap->IntrusiveList conversion means that we lose O(1)
lookups, but the majority of clients of this list are doing traversal,
not lookup. Once we have an intrusive hashing solution, we should port
this to use that, but for now, this gets rid of heap allocations during
a sensitive time.
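The customized unref() looks roughly like this (a sketch; deref_base()
is a made-up helper that drops the base reference and returns true when
the count hits zero):

    void Thread::unref() const
    {
        bool did_hit_zero = all_instances().with([&](auto& list) {
            // Drop the reference while holding the list lock, so no
            // one can find-and-ref a thread whose count just hit zero.
            if (!deref_base())
                return false;
            list.remove(const_cast<Thread&>(*this));
            return true;
        });
        if (did_hit_zero)
            delete this;
    }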
By making these functions static, we close a window where we could get
preempted after calling Processor::current() and be moved to another
processor.
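For example:

    // Before (racy): we can be preempted and migrated to another CPU
    // between current() returning and the member call executing:
    //   Thread* thread = Processor::current().current_thread();

    // After: a single static accessor with no such window:
    Thread* thread = Processor::current_thread();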
Co-authored-by: Tom <tomut@yahoo.com>
To add a new per-CPU data structure, add an ID for it to the
ProcessorSpecificDataID enum.
Then call ProcessorSpecific<T>::initialize() when you are ready to
construct the per-CPU data structure on the current CPU. It can then
be accessed via ProcessorSpecific<T>::get().
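Sketched out (ExampleData and its members are made up; the static ID
accessor reflects my assumption of how T advertises its ID):

    enum class ProcessorSpecificDataID {
        // ...
        Example, // new entry for our structure
    };

    struct ExampleData {
        static ProcessorSpecificDataID processor_specific_data_id() { return ProcessorSpecificDataID::Example; }
        u32 counter { 0 };
    };

    // Once, while bringing up each CPU:
    ProcessorSpecific<ExampleData>::initialize();

    // Afterwards, on the current CPU:
    ProcessorSpecific<ExampleData>::get().counter++;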
This patch replaces the existing hard-coded mechanisms for the
Scheduler and MemoryManager per-CPU data structures.
Depending on the values, it might be difficult to figure out whether a
value is decimal or hexadecimal, so let's make this more obvious. This
also allows copying and pasting those numbers into GNOME Calculator and
probably other apps which auto-detect the base.
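With AK format strings, that is the difference between {:x} and the
alternative form {:#x} (assuming AK's formatter is what's in play
here):

    dmesgln("Region end: {:x}", 0x1000);  // "Region end: 1000" - decimal or hex?
    dmesgln("Region end: {:#x}", 0x1000); // "Region end: 0x1000" - unambiguous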