ICS 33 Fall 2025
Exercise Set 8 Solutions
Problem 1
There are multiple ways to make this work, but I've chosen to build a class implementing a decorator. To do that, the class fundamentally needs an __init__ method that accepts a function as an argument, which is what allows it to support the behind-the-scenes f = class_method(f) call that arises from writing @class_method above a definition like def f(cls, x, y):.
But there's one more problem to solve, which is that the resulting function needs not to become a bound method when we look it up in the class dictionary (i.e., we need it not to have a self, but we instead need it to have a cls). We can do that by making class_method also be a descriptor, so that we can override the usual behavior of turning a function in a class into a bound method. If it's a descriptor, it can have a __set_name__ method, which will allow us to easily find out what class the method was defined in.
Putting those ideas together leads to the following code, which is one way to solve the problem.
class class_method:
    def __init__(self, func):
        self._func = func
    def _execute(self, *args, **kwargs):
        # Call the underlying function, passing the class where self would normally go.
        return self._func(self._cls, *args, **kwargs)
    def __set_name__(self, cls, name):
        # Remember the class in which the decorated method was defined.
        self._cls = cls
    def __get__(self, obj, objtype):
        return self._execute
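As a quick check of how this would be used — the Widget class and its create method below are hypothetical names chosen only for illustration — the decorated method receives the class as cls whether it's called on the class or on one of its objects.

class Widget:
    @class_method
    def create(cls, size):
        return f'{cls.__name__} of size {size}'

print(Widget.create(3))      # 'Widget of size 3', called on the class
print(Widget().create(4))    # 'Widget of size 4', called on an object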
Another solution that would work is to write a function that builds this descriptor instead, which we might write this way.
class _ClassMethodDescriptor:
    def __init__(self, func):
        self._func = func
    def _execute(self, *args, **kwargs):
        return self._func(self._cls, *args, **kwargs)
    def __set_name__(self, cls, name):
        self._cls = cls
    def __get__(self, obj, objtype = None):
        return self._execute

def class_method(func):
    return _ClassMethodDescriptor(func)
If the only thing the class_method function does is create an object of a class by passing its own parameters as arguments to initialize it, there's no reason the class can't simply take the place of the function; all we've done is write a thin wrapper over what would otherwise be a call to the __init__ method, anyway. (What we're starting to see is that the difference between a function and a class is a little less stark than we might have thought a quarter ago.)
Problem 2
The most common use for __getattr__ is to allow us to provide what we might call phantom attributes, which is to say that an object can give us access to attributes that it doesn't actually store — or, at least, that it doesn't store in the same location in which we asked for them. These phantom attributes might reasonably be assumed to be accompanied by normal ones; in other words, the usual case would be an object that has a combination of both normal and phantom attributes. If __getattr__ preceded the usual attribute lookup, it would get in the way of the lookup of normal attributes. That's not to say that we couldn't work around this, e.g., by calling super().__getattr__ in these cases, but it would make the ordinary situation clunky. Better for a design to accommodate the usual case as simply as possible, and leave the special handling for the out-of-the-ordinary cases instead. Consequently, __getattr__ is only called when normal attribute lookup fails.
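To make that concrete, here's a small sketch — the Settings class and its attribute names are hypothetical, not part of the exercise — showing that __getattr__ is reached only when normal lookup has already failed.

class Settings:
    def __init__(self):
        self.theme = 'dark'    # a normal attribute, stored in the object's dictionary
    def __getattr__(self, name):
        # Reached only when normal attribute lookup fails, so 'theme' never arrives here.
        return f'<no setting named {name}>'

s = Settings()
print(s.theme)     # 'dark', found in s.__dict__ without __getattr__ being called
print(s.volume)    # '<no setting named volume>', a phantom attribute supplied by __getattr__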
On the other hand, mutating an object's attributes, either by setting their values or deleting them, is something that we might like to override even for attributes that already have values. For example, we can use __setattr__ to establish an attribute's immutability, even if we haven't written a custom __getattr__ method to obtain its value. That trick only works if __setattr__ precedes any attempt to set an attribute's value, whether it exists in an object's dictionary or not.
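As a sketch of that idea — Tagged is a hypothetical class, not part of the exercise — __setattr__ can refuse a re-assignment even though the attribute already has a value, precisely because it's called before any value is stored.

class Tagged:
    def __init__(self, tag):
        self.tag = tag    # even this first assignment goes through __setattr__
    def __setattr__(self, name, value):
        if name == 'tag' and 'tag' in self.__dict__:
            raise AttributeError('tag cannot be re-assigned')
        super().__setattr__(name, value)

t = Tagged('original')
t.tag = 'replacement'    # raises AttributeError, because 'tag' already has a value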
It's worth noting that there's another dunder method, __getattribute__, that behaves more like __setattr__ and __delattr__: It's called before attempting to look up an attribute's value in the usual places (e.g., the object's dictionary, the dictionary belonging to the object's class, and so on). So, if you really need this ability — if you want to customize whether attributes are looked up in the usual way at all — there's a way to get it. Since this need is the exception rather than the rule, it has a different name than the usual __getattr__.
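A minimal sketch of that difference — Logged is a hypothetical class used only for illustration — appears below; note that __getattribute__ runs on every lookup, even for attributes that exist.

class Logged:
    def __init__(self):
        self.x = 1
    def __getattribute__(self, name):
        print(f'looking up {name!r}')               # runs for every attribute access
        return object.__getattribute__(self, name)  # then defer to the usual lookup

obj = Logged()
print(obj.x)    # prints "looking up 'x'", then 1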
Problem 3
class LimitedString:
    def __init__(self, max_length, *, can_delete = True):
        self._max_length = max_length
        self._can_delete = can_delete
    def __set_name__(self, cls, name):
        # Store each object's value in a similarly named underscore-prefixed attribute.
        self._attribute_name = f'_{name}'
    def __get__(self, obj, objtype = None):
        if obj is not None:
            return getattr(obj, self._attribute_name)
        else:
            # Accessed on the class rather than an object, so return the descriptor itself.
            return self
    def __set__(self, obj, value):
        if obj is None:
            return
        elif type(value) is not str:
            raise ValueError('not a string, but must be')
        elif len(value) > self._max_length:
            raise ValueError(f'length exceeds maximum length limit of {self._max_length}')
        else:
            setattr(obj, self._attribute_name, value)
    def __delete__(self, obj):
        if obj is None:
            return
        elif self._can_delete:
            delattr(obj, self._attribute_name)
        else:
            raise AttributeError('attribute cannot be deleted')
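A brief usage sketch follows — the Person class and its attribute names are hypothetical, chosen only to exercise the descriptor.

class Person:
    name = LimitedString(10)
    nickname = LimitedString(5, can_delete = False)
    def __init__(self, name, nickname):
        self.name = name            # routed through LimitedString.__set__
        self.nickname = nickname

p = Person('Alex', 'Al')
print(p.name)                            # 'Alex'
p.name = 'A considerably longer name'    # raises ValueError: longer than 10 characters
del p.nickname                           # would raise AttributeError, since can_delete is False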
Problem 4
There is obviously no single solution that is applicable to everyone, since an answer to this question involves discussing one's personal point of view. But, since I've asked you to present your own feelings on the matter, it's only fair that I present mine, which I'll do by answering a slightly different question that's more applicable to my own level of experience entering my first complete offering of this course: a much different version offered many years ago, which has now been almost totally replaced by this one. (I'll apologize in advance for writing a long answer — you all know me well enough to expect that by now — but I'll try to avoid it being gratuitously so.)
While my earliest memories of programming as a kid revolve around my own relatively unguided journeys into writing BASIC programs on early-1980s personal computers, the first significant traction I gained was when I finally had a full-fledged guide while taking an AP Computer Science course in high school around 1990, which was taught using a language called Pascal. Among Pascal's design decisions was the need to declare variables before their use, which is to say that you had to specify that you wanted a variable before you could assign a value to it; when you did so, you also established its type, and any subsequent attempt to assign a value of some other type into that variable would be reported as an error before the program ran. Similarly, when you defined a function, you'd specify types for its parameters and its result; subsequent calls would be required to respect those types, as well, or else the program couldn't run at all. A function in that fairly ancient version of Pascal — to the best of my recollection! — looked like this.
function rightTrianglePerimeter(width: real; height: real): real;
var
    hypotenuse: real; (* This is a variable declaration *)
begin
    hypotenuse := sqrt(width * width + height * height); (* This is an assignment statement *)
    rightTrianglePerimeter := width + height + hypotenuse; (* This specifies the return value *)
end;
My first-year computer science courses at UCI (where I did my undergraduate work in the early-to-mid 1990s) were also taught using Pascal, by which point I had solidified in my head a preference for mistakes to be caught before a program runs, because I had become quite accustomed to having that safety net underneath me while I worked. When it comes to types (e.g., not being able to store a string in a variable whose type is integer), we refer to that safety net as static type checking. Static type checking was the fashionable default in that era, as much because the relative lack of computing resources — much slower machines with much less memory than we have today — made compiled languages far preferable to interpreted ones as for any other reason; there was a much wider range of programs for which precompilation based on up-front knowledge of types was a necessity if you wanted them to be performant enough.
In the years since, I've written programs — both for fun and professionally — in many languages that offer that same ability (e.g., C++, Java, and C#), but also in languages that are more like Python (e.g., JavaScript, Erlang, a variety of shell scripting languages, and indeed a little bit of Python) where most mistakes surface only when a program runs. As the years and languages have stacked up, my basic preference hasn't changed, but has instead only grown stronger: The sooner I know something is wrong, the better. If I can know something is wrong before a program runs, that's the best outcome. If I can't, I want the error to happen at the point where the mistake has been made, not when some loosely related part of the program downstream fails instead. While I'd like to say that it's a reasoned point of view constructed carefully from varied experience, I can't rule out that it's the product of a bias introduced by my first serious learning experiences; I'll honestly never know.
Meanwhile, the broader programming community has seemed to be on a bit of a pendulum during those years. In my very early programming career, not long after graduating from UCI in the mid-1990s, statically typed languages were all the rage. Ten or fifteen years later, the pendulum had swung, with a lot of folks writing even back-end code in dynamically typed languages like Python and JavaScript, which were seen as offering a level of flexibility missing from the stodgy statically typed languages with their rigid and sometimes clunky type systems. Within another ten or fifteen years, the pendulum had swung back the other way, with many of those dynamically typed languages having pre-processors — Flow and later TypeScript atop JavaScript, MyPy atop Python — that performed the same kinds of ahead-of-time type checking that the "cool kid" languages of my youth did, albeit with richer type systems than those old-school languages offered; even the statically typed languages have type systems that are more expressive than the ones from days gone by. I don't know whether that vindicates my preferences or not, but the point is that fashions change, and, in this line of work, that's usually driven by some kind of pain causing a group of people to say "We need something better." I've found it enormously interesting to watch that play out over a long period of time.
So, it should come as no surprise when I describe my feelings about Python when I knew only as much about it as we teach in ICS 32. My view then was that it was a language that lacked the tools to properly design a program. Its tolerance for re-assigning anything, anywhere, at any time seemed like it could only mean that chaos would reign, with no means offered to control it. I could see writing small or short-lived programs in it — and I've been automating one-off tasks related to managing my courses in Python for years, long before we taught it in our introductory courses — but couldn't imagine writing larger-scale systems with it. If every mistake were doomed to be a run-time error, with many of those only arising downstream from the original mistake, how could a large program be written and debugged? And, even if you could make it work, how could you ever modify it with any level of confidence? (Of course, part of the answer to that question lies in unit testing, but there are only so many unit tests we can write; when anything is possible, we surely can't test everything.)
By the time I had dug all the way through the things that I'm now teaching in ICS 33, my point of view had become more nuanced. When it comes to Python's seemingly "anything goes" flexibility, the truth of the matter is not quite as it initially appears. With almost every part of Python's internals being customizable, we can add features to our programs more easily, but we can also add customizations that surface our mistakes earlier. We can write classes whose objects can be prevented from ever being in invalid states, so that any attempt to make them invalid can fail immediately. We can turn at least some design mistakes into problems that surface at the time a module is loaded, rather than when one of its functions is invoked. Failure of a program at the point where something is wrong beats downstream failure every time, in my view, so being able to write tools that behave that way is something I certainly prefer.
Meanwhile, Python's type system continues to evolve, as well — most of that lies beyond the scope of ICS 33, though it might be a good thing to spend some time learning, if you want to continue building your Python knowledge going forward. The vocabulary and syntax we can use in type annotations are expanding in every release, alongside the ongoing development of tools like MyPy that are able to pre-process Python programs and validate that those annotations are actually respected; for example, if we annotate that a function f has one parameter whose type is int, a tool like MyPy will be able to deduce that the calls f('Hello'), f(15.0), or f() are erroneous before the program runs. As those abilities continue to evolve, we'll have the option of treating a Python program gradually more like the statically checked languages I've preferred since my youth.
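For the curious, the sort of annotation described above looks like this — a minimal sketch, with a hypothetical function f, that a tool like MyPy could check before the program runs.

def f(n: int) -> int:
    return n * 2

f(10)         # fine: 10 is an int
f('Hello')    # MyPy flags this ahead of time, even though Python itself would happily run it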
There are still languages I'd rather use than Python, especially when it comes to large-scale system building, though those preferences are less visceral than they were when I had only scratched Python's surface. Languages are less important than what we can do with them. If there are ten languages in which we can write a system that's understandable, that's maintainable and extensible enough, that's performant enough, and that has a sufficient level of automated testing, most of how we choose between those ten boils down to the preferences of the people doing the work (or other characteristics, like a need to integrate with an existing code base that's already written in one of those ten languages). What has changed about my view of Python as I've learned more of it is the breadth of the situations in which I believe it's in that mix as a viable option.